00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 148
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3649
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.221 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.222 The recommended git tool is: git
00:00:00.222 using credential 00000000-0000-0000-0000-000000000002
00:00:00.224 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.247 Fetching changes from the remote Git repository
00:00:00.249 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.271 Using shallow fetch with depth 1
00:00:00.271 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.271 > git --version # timeout=10
00:00:00.296 > git --version # 'git version 2.39.2'
00:00:00.296 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.317 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.317 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.401 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.413 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.424 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.424 > git config core.sparsecheckout # timeout=10
00:00:07.435 > git read-tree -mu HEAD # timeout=10
00:00:07.451 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.468 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.468 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.628 [Pipeline] Start of Pipeline
00:00:07.639 [Pipeline] library
00:00:07.640 Loading library shm_lib@master
00:00:07.640 Library shm_lib@master is cached. Copying from home.
00:00:07.653 [Pipeline] node
00:00:07.667 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:07.668 [Pipeline] {
00:00:07.676 [Pipeline] catchError
00:00:07.677 [Pipeline] {
00:00:07.688 [Pipeline] wrap
00:00:07.696 [Pipeline] {
00:00:07.701 [Pipeline] stage
00:00:07.703 [Pipeline] { (Prologue)
00:00:07.715 [Pipeline] echo
00:00:07.716 Node: VM-host-SM0
00:00:07.721 [Pipeline] cleanWs
00:00:07.730 [WS-CLEANUP] Deleting project workspace...
00:00:07.730 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.735 [WS-CLEANUP] done
00:00:07.959 [Pipeline] setCustomBuildProperty
00:00:08.055 [Pipeline] httpRequest
00:00:08.963 [Pipeline] echo
00:00:08.965 Sorcerer 10.211.164.20 is alive
00:00:08.974 [Pipeline] retry
00:00:08.976 [Pipeline] {
00:00:08.994 [Pipeline] httpRequest
00:00:08.998 HttpMethod: GET
00:00:08.998 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.999 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.000 Response Code: HTTP/1.1 200 OK
00:00:09.001 Success: Status code 200 is in the accepted range: 200,404
00:00:09.001 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.017 [Pipeline] }
00:00:10.033 [Pipeline] // retry
00:00:10.039 [Pipeline] sh
00:00:10.316 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.333 [Pipeline] httpRequest
00:00:10.684 [Pipeline] echo
00:00:10.687 Sorcerer 10.211.164.20 is alive
00:00:10.698 [Pipeline] retry
00:00:10.700 [Pipeline] {
00:00:10.716 [Pipeline] httpRequest
00:00:10.721 HttpMethod: GET
00:00:10.721 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:10.722 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:10.735 Response Code: HTTP/1.1 200 OK
00:00:10.735 Success: Status code 200 is in the accepted range: 200,404
00:00:10.736 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:30.045 [Pipeline] }
00:01:30.062 [Pipeline] // retry
00:01:30.070 [Pipeline] sh
00:01:30.349 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:33.648 [Pipeline] sh
00:01:33.925 + git -C spdk log --oneline -n5
00:01:33.925 b18e1bd62 version: v24.09.1-pre
00:01:33.925 19524ad45 version: v24.09
00:01:33.925 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:01:33.925 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:01:33.925 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:01:33.942 [Pipeline] withCredentials
00:01:33.952 > git --version # timeout=10
00:01:33.965 > git --version # 'git version 2.39.2'
00:01:33.980 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:33.982 [Pipeline] {
00:01:33.990 [Pipeline] retry
00:01:33.992 [Pipeline] {
00:01:34.006 [Pipeline] sh
00:01:34.284 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:34.295 [Pipeline] }
00:01:34.313 [Pipeline] // retry
00:01:34.319 [Pipeline] }
00:01:34.335 [Pipeline] // withCredentials
00:01:34.346 [Pipeline] httpRequest
00:01:34.729 [Pipeline] echo
00:01:34.731 Sorcerer 10.211.164.20 is alive
00:01:34.739 [Pipeline] retry
00:01:34.741 [Pipeline] {
00:01:34.752 [Pipeline] httpRequest
00:01:34.755 HttpMethod: GET
00:01:34.756 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:34.756 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:34.762 Response Code: HTTP/1.1 200 OK
00:01:34.763 Success: Status code 200 is in the accepted range: 200,404
00:01:34.763 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:02.682 [Pipeline] }
00:02:02.698 [Pipeline] // retry
00:02:02.704 [Pipeline] sh
00:02:02.982 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:04.368 [Pipeline] sh
00:02:04.647 + git -C dpdk log --oneline -n5
00:02:04.647 eeb0605f11 version: 23.11.0
00:02:04.647 238778122a doc: update release notes for 23.11
00:02:04.647 46aa6b3cfc doc: fix description of RSS features
00:02:04.647 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:04.647 7e421ae345 devtools: support skipping forbid rule check
00:02:04.665 [Pipeline] writeFile
00:02:04.682 [Pipeline] sh
00:02:04.962 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:04.973 [Pipeline] sh
00:02:05.268 + cat autorun-spdk.conf
00:02:05.268 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.268 SPDK_TEST_NVMF=1
00:02:05.268 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.268 SPDK_TEST_VFIOUSER=1
00:02:05.268 SPDK_TEST_USDT=1
00:02:05.268 SPDK_RUN_UBSAN=1
00:02:05.268 SPDK_TEST_NVMF_MDNS=1
00:02:05.268 NET_TYPE=virt
00:02:05.268 SPDK_JSONRPC_GO_CLIENT=1
00:02:05.268 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.268 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:05.268 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.339 RUN_NIGHTLY=1
00:02:05.341 [Pipeline] }
00:02:05.355 [Pipeline] // stage
00:02:05.371 [Pipeline] stage
00:02:05.374 [Pipeline] { (Run VM)
00:02:05.386 [Pipeline] sh
00:02:05.666 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:05.666 + echo 'Start stage prepare_nvme.sh'
00:02:05.666 Start stage prepare_nvme.sh
00:02:05.666 + [[ -n 3 ]]
00:02:05.666 + disk_prefix=ex3
00:02:05.666 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:02:05.666 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:02:05.666 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:02:05.666 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.666 ++ SPDK_TEST_NVMF=1
00:02:05.666 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.666 ++ SPDK_TEST_VFIOUSER=1
00:02:05.666 ++ SPDK_TEST_USDT=1
00:02:05.666 ++ SPDK_RUN_UBSAN=1
00:02:05.666 ++ SPDK_TEST_NVMF_MDNS=1
00:02:05.666 ++ NET_TYPE=virt
00:02:05.666 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:05.666 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.666 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:05.666 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.666 ++ RUN_NIGHTLY=1
00:02:05.666 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:05.666 + nvme_files=()
00:02:05.666 + declare -A nvme_files
00:02:05.666 + backend_dir=/var/lib/libvirt/images/backends
00:02:05.666 + nvme_files['nvme.img']=5G
00:02:05.666 + nvme_files['nvme-cmb.img']=5G
00:02:05.666 + nvme_files['nvme-multi0.img']=4G
00:02:05.666 + nvme_files['nvme-multi1.img']=4G
00:02:05.666 + nvme_files['nvme-multi2.img']=4G
00:02:05.666 + nvme_files['nvme-openstack.img']=8G
00:02:05.666 + nvme_files['nvme-zns.img']=5G
00:02:05.666 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:05.666 + (( SPDK_TEST_FTL == 1 ))
00:02:05.666 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:05.666 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:02:05.666 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:02:05.666 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:02:05.666 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:02:05.666 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:02:05.666 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.666 + for nvme in "${!nvme_files[@]}"
00:02:05.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:02:05.924 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.924 + for nvme in "${!nvme_files[@]}"
00:02:05.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:02:05.924 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.924 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:02:05.924 + echo 'End stage prepare_nvme.sh'
00:02:05.924 End stage prepare_nvme.sh
00:02:05.936 [Pipeline] sh
00:02:06.216 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:06.216 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39
00:02:06.216
00:02:06.216 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:02:06.216 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:02:06.216 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:06.216 HELP=0
00:02:06.216 DRY_RUN=0
00:02:06.216 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:02:06.216 NVME_DISKS_TYPE=nvme,nvme,
00:02:06.216 NVME_AUTO_CREATE=0
00:02:06.216 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:02:06.216 NVME_CMB=,,
00:02:06.216 NVME_PMR=,,
00:02:06.216 NVME_ZNS=,,
00:02:06.216 NVME_MS=,,
00:02:06.216 NVME_FDP=,,
SPDK_VAGRANT_DISTRO=fedora39
00:02:06.216 SPDK_VAGRANT_VMCPU=10
00:02:06.216 SPDK_VAGRANT_VMRAM=12288
00:02:06.216 SPDK_VAGRANT_PROVIDER=libvirt
00:02:06.216 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:06.216 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:06.216 SPDK_OPENSTACK_NETWORK=0
00:02:06.216 VAGRANT_PACKAGE_BOX=0
00:02:06.216 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:06.216 FORCE_DISTRO=true
00:02:06.216 VAGRANT_BOX_VERSION=
00:02:06.216 EXTRA_VAGRANTFILES=
00:02:06.216 NIC_MODEL=e1000
00:02:06.216
00:02:06.216 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:02:06.216 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:09.499 Bringing machine 'default' up with 'libvirt' provider...
00:02:09.757 ==> default: Creating image (snapshot of base box volume).
00:02:10.016 ==> default: Creating domain with the following settings...
00:02:10.016 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732123371_139e25e4e8bf14e760c3
00:02:10.016 ==> default: -- Domain type: kvm
00:02:10.016 ==> default: -- Cpus: 10
00:02:10.016 ==> default: -- Feature: acpi
00:02:10.016 ==> default: -- Feature: apic
00:02:10.016 ==> default: -- Feature: pae
00:02:10.016 ==> default: -- Memory: 12288M
00:02:10.016 ==> default: -- Memory Backing: hugepages:
00:02:10.016 ==> default: -- Management MAC:
00:02:10.016 ==> default: -- Loader:
00:02:10.016 ==> default: -- Nvram:
00:02:10.016 ==> default: -- Base box: spdk/fedora39
00:02:10.016 ==> default: -- Storage pool: default
00:02:10.016 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732123371_139e25e4e8bf14e760c3.img (20G)
00:02:10.016 ==> default: -- Volume Cache: default
00:02:10.016 ==> default: -- Kernel:
00:02:10.016 ==> default: -- Initrd:
00:02:10.016 ==> default: -- Graphics Type: vnc
00:02:10.016 ==> default: -- Graphics Port: -1
00:02:10.016 ==> default: -- Graphics IP: 127.0.0.1
00:02:10.016 ==> default: -- Graphics Password: Not defined
00:02:10.016 ==> default: -- Video Type: cirrus
00:02:10.016 ==> default: -- Video VRAM: 9216
00:02:10.016 ==> default: -- Sound Type:
00:02:10.016 ==> default: -- Keymap: en-us
00:02:10.016 ==> default: -- TPM Path:
00:02:10.016 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:10.016 ==> default: -- Command line args:
00:02:10.016 ==> default: -> value=-device,
00:02:10.016 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:10.016 ==> default: -> value=-drive,
00:02:10.016 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:02:10.016 ==> default: -> value=-device,
00:02:10.016 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.016 ==> default: -> value=-device,
00:02:10.016 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:10.016 ==> default: -> value=-drive,
00:02:10.016 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:10.016 ==> default: -> value=-device,
00:02:10.016 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.016 ==> default: -> value=-drive,
00:02:10.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:10.017 ==> default: -> value=-device,
00:02:10.017 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.017 ==> default: -> value=-drive,
00:02:10.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:10.017 ==> default: -> value=-device,
00:02:10.017 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.275 ==> default: Creating shared folders metadata...
00:02:10.275 ==> default: Starting domain.
00:02:11.651 ==> default: Waiting for domain to get an IP address...
00:02:29.795 ==> default: Waiting for SSH to become available...
00:02:29.795 ==> default: Configuring and enabling network interfaces...
00:02:32.327 default: SSH address: 192.168.121.239:22
00:02:32.327 default: SSH username: vagrant
00:02:32.327 default: SSH auth method: private key
00:02:34.859 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:41.421 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:47.999 ==> default: Mounting SSHFS shared folder...
00:02:48.934 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:48.934 ==> default: Checking Mount..
00:02:50.311 ==> default: Folder Successfully Mounted!
00:02:50.311 ==> default: Running provisioner: file...
00:02:51.247 default: ~/.gitconfig => .gitconfig
00:02:51.505
00:02:51.505 SUCCESS!
00:02:51.505
00:02:51.505 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:51.505 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:51.505 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:51.505
00:02:51.514 [Pipeline] }
00:02:51.529 [Pipeline] // stage
00:02:51.539 [Pipeline] dir
00:02:51.539 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:02:51.541 [Pipeline] {
00:02:51.553 [Pipeline] catchError
00:02:51.555 [Pipeline] {
00:02:51.567 [Pipeline] sh
00:02:51.845 + vagrant ssh-config --host vagrant
00:02:51.845 + sed -ne /^Host/,$p
00:02:51.845 + tee ssh_conf
00:02:54.379 Host vagrant
00:02:54.379 HostName 192.168.121.239
00:02:54.379 User vagrant
00:02:54.379 Port 22
00:02:54.379 UserKnownHostsFile /dev/null
00:02:54.379 StrictHostKeyChecking no
00:02:54.379 PasswordAuthentication no
00:02:54.379 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:54.379 IdentitiesOnly yes
00:02:54.379 LogLevel FATAL
00:02:54.379 ForwardAgent yes
00:02:54.379 ForwardX11 yes
00:02:54.379
00:02:54.393 [Pipeline] withEnv
00:02:54.395 [Pipeline] {
00:02:54.410 [Pipeline] sh
00:02:54.689 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:54.689 source /etc/os-release
00:02:54.689 [[ -e /image.version ]] && img=$(< /image.version)
00:02:54.689 # Minimal, systemd-like check.
00:02:54.689 if [[ -e /.dockerenv ]]; then
00:02:54.689 # Clear garbage from the node's name:
00:02:54.689 # agt-er_autotest_547-896 -> autotest_547-896
00:02:54.689 # $HOSTNAME is the actual container id
00:02:54.689 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:54.689 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:54.689 # We can assume this is a mount from a host where container is running,
00:02:54.689 # so fetch its hostname to easily identify the target swarm worker.
00:02:54.689 container="$(< /etc/hostname) ($agent)"
00:02:54.689 else
00:02:54.689 # Fallback
00:02:54.689 container=$agent
00:02:54.689 fi
00:02:54.689 fi
00:02:54.690 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:54.690
00:02:54.959 [Pipeline] }
00:02:54.980 [Pipeline] // withEnv
00:02:54.990 [Pipeline] setCustomBuildProperty
00:02:55.006 [Pipeline] stage
00:02:55.008 [Pipeline] { (Tests)
00:02:55.028 [Pipeline] sh
00:02:55.307 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:55.580 [Pipeline] sh
00:02:55.859 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:56.145 [Pipeline] timeout
00:02:56.145 Timeout set to expire in 1 hr 0 min
00:02:56.147 [Pipeline] {
00:02:56.161 [Pipeline] sh
00:02:56.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:57.007 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:02:57.019 [Pipeline] sh
00:02:57.297 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:57.564 [Pipeline] sh
00:02:57.838 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:58.114 [Pipeline] sh
00:02:58.398 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:58.656 ++ readlink -f spdk_repo
00:02:58.656 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:58.656 + [[ -n /home/vagrant/spdk_repo ]]
00:02:58.656 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:58.656 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:58.656 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:58.656 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:58.656 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:58.656 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:58.656 + cd /home/vagrant/spdk_repo
00:02:58.656 + source /etc/os-release
00:02:58.656 ++ NAME='Fedora Linux'
00:02:58.656 ++ VERSION='39 (Cloud Edition)'
00:02:58.656 ++ ID=fedora
00:02:58.656 ++ VERSION_ID=39
00:02:58.656 ++ VERSION_CODENAME=
00:02:58.656 ++ PLATFORM_ID=platform:f39
00:02:58.656 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:58.656 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:58.656 ++ LOGO=fedora-logo-icon
00:02:58.656 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:58.656 ++ HOME_URL=https://fedoraproject.org/
00:02:58.656 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:58.656 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:58.656 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:58.656 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:58.656 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:58.656 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:58.656 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:58.656 ++ SUPPORT_END=2024-11-12
00:02:58.656 ++ VARIANT='Cloud Edition'
00:02:58.656 ++ VARIANT_ID=cloud
00:02:58.656 + uname -a
00:02:58.656 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:58.656 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:58.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:58.914 Hugepages
00:02:58.914 node hugesize free / total
00:02:58.914 node0 1048576kB 0 / 0
00:02:58.914 node0 2048kB 0 / 0
00:02:58.914
00:02:58.914 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:59.173 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:59.173 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:59.173 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:59.173 + rm -f /tmp/spdk-ld-path
00:02:59.173 + source autorun-spdk.conf
00:02:59.173 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:59.173 ++ SPDK_TEST_NVMF=1
00:02:59.173 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:59.173 ++ SPDK_TEST_VFIOUSER=1
00:02:59.173 ++ SPDK_TEST_USDT=1
00:02:59.173 ++ SPDK_RUN_UBSAN=1
00:02:59.173 ++ SPDK_TEST_NVMF_MDNS=1
00:02:59.173 ++ NET_TYPE=virt
00:02:59.173 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:59.174 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:59.174 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:59.174 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:59.174 ++ RUN_NIGHTLY=1
00:02:59.174 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:59.174 + [[ -n '' ]]
00:02:59.174 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:59.174 + for M in /var/spdk/build-*-manifest.txt
00:02:59.174 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:59.174 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:59.174 + for M in /var/spdk/build-*-manifest.txt
00:02:59.174 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:59.174 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:59.174 + for M in /var/spdk/build-*-manifest.txt
00:02:59.174 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:59.174 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:59.174 ++ uname
00:02:59.174 + [[ Linux == \L\i\n\u\x ]]
00:02:59.174 + sudo dmesg -T
00:02:59.174 + sudo dmesg --clear
00:02:59.174 + dmesg_pid=5994
00:02:59.174 + [[ Fedora Linux == FreeBSD ]]
00:02:59.174 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:59.174 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:59.174 + sudo dmesg -Tw
00:02:59.174 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:59.174 + [[ -x /usr/src/fio-static/fio ]]
00:02:59.174 + export FIO_BIN=/usr/src/fio-static/fio
00:02:59.174 + FIO_BIN=/usr/src/fio-static/fio
00:02:59.174 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:59.174 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:59.174 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:59.174 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:59.174 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:59.174 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:59.174 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:59.174 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:59.174 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:59.174 Test configuration:
00:02:59.174 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:59.174 SPDK_TEST_NVMF=1
00:02:59.174 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:59.174 SPDK_TEST_VFIOUSER=1
00:02:59.174 SPDK_TEST_USDT=1
00:02:59.174 SPDK_RUN_UBSAN=1
00:02:59.174 SPDK_TEST_NVMF_MDNS=1
00:02:59.174 NET_TYPE=virt
00:02:59.174 SPDK_JSONRPC_GO_CLIENT=1
00:02:59.174 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:59.174 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:59.174 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:59.433 RUN_NIGHTLY=1 17:23:40 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
17:23:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
17:23:40 -- scripts/common.sh@15 -- $ shopt -s extglob
17:23:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
17:23:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:23:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:23:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:23:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:23:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:23:40 -- paths/export.sh@5 -- $ export PATH
17:23:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:23:40 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
17:23:40 -- common/autobuild_common.sh@479 -- $ date +%s
17:23:40 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732123420.XXXXXX
17:23:40 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732123420.XPJdLR
17:23:40 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
17:23:40 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
17:23:40 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
17:23:40 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
17:23:40 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
17:23:40 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
17:23:40 -- common/autobuild_common.sh@495 -- $ get_config_params
17:23:40 -- common/autotest_common.sh@407 -- $ xtrace_disable
17:23:40 -- common/autotest_common.sh@10 -- $ set +x
17:23:41 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
17:23:41 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
17:23:41 -- pm/common@17 -- $ local monitor
17:23:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:23:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:23:41 -- pm/common@25 -- $ sleep 1
17:23:41 -- pm/common@21 -- $ date +%s
17:23:41 -- pm/common@21 -- $ date +%s
17:23:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732123421
17:23:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732123421
Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732123421_collect-cpu-load.pm.log
Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732123421_collect-vmstat.pm.log
17:23:42 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
17:23:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:23:42 -- spdk/autobuild.sh@12 -- $ umask 022
17:23:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
17:23:42 -- spdk/autobuild.sh@16 -- $ date -u
00:03:00.367 Wed Nov 20 05:23:42 PM UTC 2024
17:23:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:00.368 v24.09-1-gb18e1bd62
17:23:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
17:23:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:23:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:23:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
17:23:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
17:23:42 -- common/autotest_common.sh@10 -- $ set +x
00:03:00.368 ************************************
00:03:00.368 START TEST ubsan
00:03:00.368 ************************************
00:03:00.368 using ubsan
17:23:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:00.368
00:03:00.368 real 0m0.000s
00:03:00.368 user 0m0.000s
00:03:00.368 sys 0m0.000s
17:23:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
17:23:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:00.368 ************************************
00:03:00.368 END TEST ubsan
00:03:00.368 ************************************
17:23:42 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
17:23:42 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
17:23:42 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
17:23:42 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
17:23:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
17:23:42 -- common/autotest_common.sh@10 -- $ set +x
00:03:00.368 ************************************
00:03:00.368 START TEST build_native_dpdk
00:03:00.368 ************************************
17:23:42 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
17:23:42 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
17:23:42 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
17:23:42 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
17:23:42 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
17:23:42 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
17:23:42 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
17:23:42 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
17:23:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
17:23:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
17:23:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
17:23:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
17:23:42 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
17:23:42 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
17:23:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
17:23:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
17:23:42 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
17:23:42 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:00.368 eeb0605f11 version: 23.11.0
00:03:00.368 238778122a doc: update release notes for 23.11
00:03:00.368 46aa6b3cfc doc: fix description of RSS features
00:03:00.368 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:00.368 7e421ae345 devtools: support skipping forbid rule check
17:23:42 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
17:23:42 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
17:23:42 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
17:23:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
17:23:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
17:23:42 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
17:23:42 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
17:23:42 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
17:23:42 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
17:23:42 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
17:23:42 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
17:23:42 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
17:23:42 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
17:23:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
17:23:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
17:23:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
17:23:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:00.625 17:23:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
17:23:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
17:23:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
17:23:42 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
17:23:42 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:00.625 patching file config/rte_config.h
00:03:00.625 Hunk #1 succeeded at 60 (offset 1 line).
17:23:42 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
17:23:42 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
17:23:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
17:23:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
17:23:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
17:23:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
17:23:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
17:23:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
17:23:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
17:23:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
17:23:42 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
17:23:42 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:03:00.626 patching file lib/pcapng/rte_pcapng.c
17:23:42 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0
17:23:42 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
17:23:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
17:23:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
17:23:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
17:23:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
17:23:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
17:23:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
17:23:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
17:23:42 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
17:23:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
17:23:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
17:23:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
17:23:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
17:23:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
17:23:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
17:23:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
17:23:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
17:23:42 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
17:23:42 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
17:23:42 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
17:23:42 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
17:23:42 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
17:23:42 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:05.891 The Meson build system
00:03:05.891 Version: 1.5.0
00:03:05.891 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:05.891 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:05.891 Build type: native build
00:03:05.891 Program cat found: YES (/usr/bin/cat)
00:03:05.891 Project name: DPDK
00:03:05.891 Project version: 23.11.0
00:03:05.891 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:05.891 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:05.891 Host machine cpu family: x86_64
00:03:05.891 Host machine cpu: x86_64
00:03:05.891 Message: ## Building in Developer Mode ##
00:03:05.891 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:05.891 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:05.891 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:05.891 Program python3 found: YES (/usr/bin/python3)
00:03:05.891 Program cat found: YES (/usr/bin/cat)
00:03:05.891 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:05.891 Compiler for C supports arguments -march=native: YES
00:03:05.891 Checking for size of "void *" : 8
00:03:05.891 Checking for size of "void *" : 8 (cached)
00:03:05.892 Library m found: YES
00:03:05.892 Library numa found: YES
00:03:05.892 Has header "numaif.h" : YES
00:03:05.892 Library fdt found: NO
00:03:05.892 Library execinfo found: NO
00:03:05.892 Has header "execinfo.h" : YES
00:03:05.892 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:05.892 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:05.892 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:05.892 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:05.892 Run-time dependency openssl found: YES 3.1.1
00:03:05.892 Run-time dependency libpcap found: YES 1.10.4
00:03:05.892 Has header "pcap.h" with dependency libpcap: YES
00:03:05.892 Compiler for C supports arguments -Wcast-qual: YES
00:03:05.892 Compiler for C supports arguments -Wdeprecated: YES
00:03:05.892 Compiler for C supports arguments -Wformat: YES
00:03:05.892 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:05.892 Compiler for C supports arguments -Wformat-security: NO
00:03:05.892 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:05.892 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:05.892 Compiler for C supports arguments -Wnested-externs: YES
00:03:05.892 Compiler for C supports arguments -Wold-style-definition: YES
00:03:05.892 Compiler for C supports arguments -Wpointer-arith: YES
00:03:05.892 Compiler for C supports arguments -Wsign-compare: YES
00:03:05.892 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:05.892 Compiler for C supports arguments -Wundef: YES
00:03:05.892 Compiler for C supports arguments -Wwrite-strings: YES
00:03:05.892 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:05.892 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:05.892 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:05.892 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:05.892 Program objdump found: YES (/usr/bin/objdump)
00:03:05.892 Compiler for C supports arguments -mavx512f: YES
00:03:05.892 Checking if "AVX512 checking" compiles: YES
00:03:05.892 Fetching value of define "__SSE4_2__" : 1
00:03:05.892 Fetching value of define "__AES__" : 1
00:03:05.892 Fetching value of define "__AVX__" : 1
00:03:05.892 Fetching value of define "__AVX2__" : 1
00:03:05.892 Fetching value of define "__AVX512BW__" : (undefined)
00:03:05.892 Fetching value of define "__AVX512CD__" : (undefined)
00:03:05.892 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:05.892 Fetching value of define "__AVX512F__" : (undefined)
00:03:05.892 Fetching value of define "__AVX512VL__" : (undefined)
00:03:05.892 Fetching value of define "__PCLMUL__" : 1
00:03:05.892 Fetching value of define "__RDRND__" : 1
00:03:05.892 Fetching value of define "__RDSEED__" : 1
00:03:05.892 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:05.892 Fetching value of define "__znver1__" : (undefined)
00:03:05.892 Fetching value of define "__znver2__" : (undefined)
00:03:05.892 Fetching value of define "__znver3__" : (undefined)
00:03:05.892 Fetching value of define "__znver4__" : (undefined)
00:03:05.892 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:05.892 Message: lib/log: Defining dependency "log"
00:03:05.892 Message: lib/kvargs: Defining dependency "kvargs"
00:03:05.892 Message: lib/telemetry: Defining dependency "telemetry"
00:03:05.892 Checking for function "getentropy" : NO
00:03:05.892 Message: lib/eal: Defining dependency "eal"
00:03:05.892 Message: lib/ring: Defining dependency "ring"
00:03:05.892 Message: lib/rcu: Defining dependency "rcu"
00:03:05.892 Message: lib/mempool: Defining dependency "mempool"
00:03:05.892 Message: lib/mbuf: Defining dependency "mbuf"
00:03:05.892 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:05.892 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:05.892 Compiler for C supports arguments -mpclmul: YES
00:03:05.892 Compiler for C supports arguments -maes: YES
00:03:05.892 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:05.892 Compiler for C supports arguments -mavx512bw: YES
00:03:05.892 Compiler for C supports arguments -mavx512dq: YES
00:03:05.892 Compiler for C supports arguments -mavx512vl: YES
00:03:05.892 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:05.892 Compiler for C supports arguments -mavx2: YES
00:03:05.892 Compiler for C supports arguments -mavx: YES
00:03:05.892 Message: lib/net: Defining dependency "net"
00:03:05.892 Message: lib/meter: Defining dependency "meter"
00:03:05.892 Message: lib/ethdev: Defining dependency "ethdev"
00:03:05.892 Message: lib/pci: Defining dependency "pci"
00:03:05.892 Message: lib/cmdline: Defining dependency "cmdline"
00:03:05.892 Message: lib/metrics: Defining dependency "metrics"
00:03:05.892 Message: lib/hash: Defining dependency "hash"
00:03:05.892 Message: lib/timer: Defining dependency "timer"
00:03:05.892 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:03:05.892 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:03:05.892 Message: lib/acl: Defining dependency "acl"
00:03:05.892 Message: lib/bbdev: Defining dependency "bbdev"
00:03:05.892 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:05.892 Run-time dependency libelf found: YES 0.191
00:03:05.892 Message: lib/bpf: Defining dependency "bpf"
00:03:05.892 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:05.892 Message: lib/compressdev: Defining dependency "compressdev"
00:03:05.892 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:05.892 Message: lib/distributor: Defining dependency "distributor"
00:03:05.892 Message: lib/dmadev: Defining dependency "dmadev"
00:03:05.892 Message: lib/efd: Defining dependency "efd"
00:03:05.892 Message: lib/eventdev: Defining dependency "eventdev"
00:03:05.892 Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:05.892 Message: lib/gpudev: Defining dependency "gpudev"
00:03:05.892 Message: lib/gro: Defining dependency "gro"
00:03:05.892 Message: lib/gso: Defining dependency "gso"
00:03:05.892 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:05.892 Message: lib/jobstats: Defining dependency "jobstats"
00:03:05.892 Message: lib/latencystats: Defining dependency "latencystats"
00:03:05.892 Message: lib/lpm: Defining dependency "lpm"
00:03:05.892 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512IFMA__" : (undefined)
00:03:05.892 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:03:05.892 Message: lib/member: Defining dependency "member"
00:03:05.892 Message: lib/pcapng: Defining dependency "pcapng"
00:03:05.892 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:05.892 Message: lib/power: Defining dependency "power"
00:03:05.892 Message: lib/rawdev: Defining dependency "rawdev"
00:03:05.892 Message: lib/regexdev: Defining dependency "regexdev"
00:03:05.892 Message: lib/mldev: Defining dependency "mldev"
00:03:05.892 Message: lib/rib: Defining dependency "rib"
00:03:05.892 Message: lib/reorder: Defining dependency "reorder"
00:03:05.892 Message: lib/sched: Defining dependency "sched"
00:03:05.892 Message: lib/security: Defining dependency "security"
00:03:05.892 Message: lib/stack: Defining dependency "stack"
00:03:05.892 Has header "linux/userfaultfd.h" : YES
00:03:05.892 Has header "linux/vduse.h" : YES
00:03:05.892 Message: lib/vhost: Defining dependency "vhost"
00:03:05.892 Message: lib/ipsec: Defining dependency "ipsec"
00:03:05.892 Message: lib/pdcp: Defining dependency "pdcp"
00:03:05.892 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:05.892 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:05.892 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:03:05.892 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:05.892 Message: lib/fib: Defining dependency "fib"
00:03:05.892 Message: lib/port: Defining dependency "port"
00:03:05.892 Message: lib/pdump: Defining dependency "pdump"
00:03:05.892 Message: lib/table: Defining dependency "table"
00:03:05.892 Message: lib/pipeline: Defining dependency "pipeline"
00:03:05.892 Message: lib/graph: Defining dependency "graph"
00:03:05.892 Message: lib/node: Defining dependency "node"
00:03:05.892 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:07.795 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:07.795 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:07.795 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:07.795 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:07.795 Compiler for C supports arguments -Wno-unused-value: YES
00:03:07.795 Compiler for C supports arguments -Wno-format: YES
00:03:07.795 Compiler for C supports arguments -Wno-format-security: YES
00:03:07.795 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:07.795 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:07.795 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:07.795 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:07.795 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:07.795 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:07.795 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:07.795 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:07.795 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:07.795 Has header "sys/epoll.h" : YES
00:03:07.795 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:07.795 Configuring doxy-api-html.conf using configuration
00:03:07.795 Configuring doxy-api-man.conf using configuration
00:03:07.795 Program mandb found: YES (/usr/bin/mandb)
00:03:07.795 Program sphinx-build found: NO
00:03:07.795 Configuring rte_build_config.h using configuration
00:03:07.795 Message:
00:03:07.795 =================
00:03:07.795 Applications Enabled
00:03:07.795 =================
00:03:07.795
00:03:07.795 apps:
00:03:07.795 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:03:07.795 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:03:07.795 test-pmd, test-regex, test-sad, test-security-perf,
00:03:07.795
00:03:07.795 Message:
00:03:07.795 =================
00:03:07.795 Libraries Enabled
00:03:07.795 =================
00:03:07.795
00:03:07.795 libs:
00:03:07.795 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:07.795 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:03:07.795 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:03:07.795 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:03:07.795 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:03:07.795 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:03:07.795 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:03:07.795
00:03:07.795
00:03:07.795 Message:
00:03:07.795 ===============
00:03:07.795 Drivers Enabled
00:03:07.795 ===============
00:03:07.795
00:03:07.795 common:
00:03:07.795
00:03:07.795 bus:
00:03:07.795 pci, vdev,
00:03:07.795 mempool:
00:03:07.795 ring,
00:03:07.795 dma:
00:03:07.795
00:03:07.795 net:
00:03:07.795 i40e,
00:03:07.795 raw:
00:03:07.795
00:03:07.795 crypto:
00:03:07.795
00:03:07.795 compress:
00:03:07.795
00:03:07.795 regex:
00:03:07.795
00:03:07.795 ml:
00:03:07.795
00:03:07.795 vdpa:
00:03:07.795
00:03:07.795 event:
00:03:07.795
00:03:07.796 baseband:
00:03:07.796
00:03:07.796 gpu:
00:03:07.796
00:03:07.796
00:03:07.796 Message:
00:03:07.796 =================
00:03:07.796 Content Skipped
00:03:07.796 =================
00:03:07.796
00:03:07.796 apps:
00:03:07.796
00:03:07.796 libs:
00:03:07.796
00:03:07.796 drivers:
00:03:07.796 common/cpt: not in enabled drivers build config
00:03:07.796 common/dpaax: not in enabled drivers build config
00:03:07.796 common/iavf: not in enabled drivers build config
00:03:07.796 common/idpf: not in enabled drivers build config
00:03:07.796 common/mvep: not in enabled drivers build config
00:03:07.796 common/octeontx: not in enabled drivers build config
00:03:07.796 bus/auxiliary: not in enabled drivers build config
00:03:07.796 bus/cdx: not in enabled drivers build config
00:03:07.796 bus/dpaa: not in enabled drivers build config
00:03:07.796 bus/fslmc: not in enabled drivers build config
00:03:07.796 bus/ifpga: not in enabled drivers build config
00:03:07.796 bus/platform: not in enabled drivers build config
00:03:07.796 bus/vmbus: not in enabled drivers build config
00:03:07.796 common/cnxk: not in enabled drivers build config
00:03:07.796 common/mlx5: not in enabled drivers build config
00:03:07.796 common/nfp: not in enabled drivers build config
00:03:07.796 common/qat: not in enabled drivers build config
00:03:07.796 common/sfc_efx: not in enabled drivers build config
00:03:07.796 mempool/bucket: not in enabled drivers build config
00:03:07.796 mempool/cnxk: not in enabled drivers build config
00:03:07.796 mempool/dpaa: not in enabled drivers build config
00:03:07.796 mempool/dpaa2: not in enabled drivers build config
00:03:07.796 mempool/octeontx: not in enabled drivers build config
00:03:07.796 mempool/stack: not in enabled drivers build config
00:03:07.796 dma/cnxk: not in enabled drivers build config
00:03:07.796 dma/dpaa: not in enabled drivers build config
00:03:07.796 dma/dpaa2: not in enabled drivers build config
00:03:07.796 dma/hisilicon: not in enabled drivers build config
00:03:07.796 dma/idxd: not in enabled drivers build config
00:03:07.796 dma/ioat: not in enabled drivers build config
00:03:07.796 dma/skeleton: not in enabled drivers build config
00:03:07.796 net/af_packet: not in enabled drivers build config
00:03:07.796 net/af_xdp: not in enabled drivers build config
00:03:07.796 net/ark: not in enabled drivers build config
00:03:07.796 net/atlantic: not in enabled drivers build config
00:03:07.796 net/avp: not in enabled drivers build config
00:03:07.796 net/axgbe: not in enabled drivers build config
00:03:07.796 net/bnx2x: not in enabled drivers build config
00:03:07.796 net/bnxt: not in enabled drivers build config
00:03:07.796 net/bonding: not in enabled drivers build config
00:03:07.796 net/cnxk: not in enabled drivers build config
00:03:07.796 net/cpfl: not in enabled drivers build config
00:03:07.796 net/cxgbe: not in enabled drivers build config
00:03:07.796 net/dpaa: not in enabled drivers build config
00:03:07.796 net/dpaa2: not in enabled drivers build config
00:03:07.796 net/e1000: not in enabled drivers build config
00:03:07.796 net/ena: not in enabled drivers build config
00:03:07.796 net/enetc: not in enabled drivers build config
00:03:07.796 net/enetfec: not in enabled drivers build config
00:03:07.796 net/enic: not in enabled drivers build config
00:03:07.796 net/failsafe: not in enabled drivers build config
00:03:07.796 net/fm10k: not in enabled drivers build config
00:03:07.796 net/gve: not in enabled drivers build config
00:03:07.796 net/hinic: not in enabled drivers build config
00:03:07.796 net/hns3: not in enabled drivers build config
00:03:07.796 net/iavf: not in enabled drivers build config
00:03:07.796 net/ice: not in enabled drivers build config
00:03:07.796 net/idpf: not in enabled drivers build config
00:03:07.796 net/igc: not in enabled drivers build config
00:03:07.796 net/ionic: not in enabled drivers build config
00:03:07.796 net/ipn3ke: not in enabled drivers build config
00:03:07.796 net/ixgbe: not in enabled drivers build config
00:03:07.796 net/mana: not in enabled drivers build config
00:03:07.796 net/memif: not in enabled drivers build config
00:03:07.796 net/mlx4: not in enabled drivers build config
00:03:07.796 net/mlx5: not in enabled drivers build config
00:03:07.796 net/mvneta: not in enabled drivers build config
00:03:07.796 net/mvpp2: not in enabled drivers build config
00:03:07.796 net/netvsc: not in enabled drivers build config
00:03:07.796 net/nfb: not in enabled drivers build config
00:03:07.796 net/nfp: not in enabled drivers build config
00:03:07.796 net/ngbe: not in enabled drivers build config
00:03:07.796 net/null: not in enabled drivers build config
00:03:07.796 net/octeontx: not in enabled drivers build config
00:03:07.796 net/octeon_ep: not in enabled drivers build config
00:03:07.796 net/pcap: not in enabled drivers build config
00:03:07.796 net/pfe: not in enabled drivers build config
00:03:07.796 net/qede: not in enabled drivers build config
00:03:07.796 net/ring: not in enabled drivers build config
00:03:07.796 net/sfc: not in enabled drivers build config
00:03:07.796 net/softnic: not in enabled drivers build config
00:03:07.796 net/tap: not in enabled drivers build config
00:03:07.796 net/thunderx: not in enabled drivers build config
00:03:07.796 net/txgbe: not in enabled drivers build config
00:03:07.796 net/vdev_netvsc: not in enabled drivers build config
00:03:07.796 net/vhost: not in enabled drivers build config
00:03:07.796 net/virtio:
not in enabled drivers build config 00:03:07.796 net/vmxnet3: not in enabled drivers build config 00:03:07.796 raw/cnxk_bphy: not in enabled drivers build config 00:03:07.796 raw/cnxk_gpio: not in enabled drivers build config 00:03:07.796 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:07.796 raw/ifpga: not in enabled drivers build config 00:03:07.796 raw/ntb: not in enabled drivers build config 00:03:07.796 raw/skeleton: not in enabled drivers build config 00:03:07.796 crypto/armv8: not in enabled drivers build config 00:03:07.796 crypto/bcmfs: not in enabled drivers build config 00:03:07.796 crypto/caam_jr: not in enabled drivers build config 00:03:07.796 crypto/ccp: not in enabled drivers build config 00:03:07.796 crypto/cnxk: not in enabled drivers build config 00:03:07.796 crypto/dpaa_sec: not in enabled drivers build config 00:03:07.796 crypto/dpaa2_sec: not in enabled drivers build config 00:03:07.796 crypto/ipsec_mb: not in enabled drivers build config 00:03:07.796 crypto/mlx5: not in enabled drivers build config 00:03:07.796 crypto/mvsam: not in enabled drivers build config 00:03:07.796 crypto/nitrox: not in enabled drivers build config 00:03:07.796 crypto/null: not in enabled drivers build config 00:03:07.796 crypto/octeontx: not in enabled drivers build config 00:03:07.796 crypto/openssl: not in enabled drivers build config 00:03:07.796 crypto/scheduler: not in enabled drivers build config 00:03:07.796 crypto/uadk: not in enabled drivers build config 00:03:07.796 crypto/virtio: not in enabled drivers build config 00:03:07.796 compress/isal: not in enabled drivers build config 00:03:07.796 compress/mlx5: not in enabled drivers build config 00:03:07.796 compress/octeontx: not in enabled drivers build config 00:03:07.796 compress/zlib: not in enabled drivers build config 00:03:07.796 regex/mlx5: not in enabled drivers build config 00:03:07.796 regex/cn9k: not in enabled drivers build config 00:03:07.796 ml/cnxk: not in enabled drivers build config 00:03:07.796 vdpa/ifc: not in enabled drivers build config 00:03:07.796 vdpa/mlx5: not in enabled drivers build config 00:03:07.796 vdpa/nfp: not in enabled drivers build config 00:03:07.796 vdpa/sfc: not in enabled drivers build config 00:03:07.796 event/cnxk: not in enabled drivers build config 00:03:07.796 event/dlb2: not in enabled drivers build config 00:03:07.796 event/dpaa: not in enabled drivers build config 00:03:07.796 event/dpaa2: not in enabled drivers build config 00:03:07.796 event/dsw: not in enabled drivers build config 00:03:07.796 event/opdl: not in enabled drivers build config 00:03:07.796 event/skeleton: not in enabled drivers build config 00:03:07.796 event/sw: not in enabled drivers build config 00:03:07.796 event/octeontx: not in enabled drivers build config 00:03:07.796 baseband/acc: not in enabled drivers build config 00:03:07.796 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:07.796 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:07.796 baseband/la12xx: not in enabled drivers build config 00:03:07.796 baseband/null: not in enabled drivers build config 00:03:07.796 baseband/turbo_sw: not in enabled drivers build config 00:03:07.796 gpu/cuda: not in enabled drivers build config 00:03:07.796 00:03:07.796 00:03:07.796 Build targets in project: 220 00:03:07.796 00:03:07.796 DPDK 23.11.0 00:03:07.796 00:03:07.796 User defined options 00:03:07.796 libdir : lib 00:03:07.796 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:07.797 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:03:07.797 c_link_args : 00:03:07.797 enable_docs : false 00:03:07.797 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:07.797 enable_kmods : false 00:03:07.797 machine : native 00:03:07.797 tests : false 00:03:07.797 00:03:07.797 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.797 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:07.797 17:23:49 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:07.797 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:08.055 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:08.055 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:08.055 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:08.055 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:08.055 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:08.055 [6/710] Linking static target lib/librte_kvargs.a 00:03:08.055 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:08.055 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:08.055 [9/710] Linking static target lib/librte_log.a 00:03:08.055 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:08.314 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.573 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:08.573 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.573 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:08.573 [15/710] Linking target lib/librte_log.so.24.0 00:03:08.573 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:08.573 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:08.831 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:08.831 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:08.831 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:08.831 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:09.090 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:09.090 [23/710] Linking target lib/librte_kvargs.so.24.0 00:03:09.090 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:09.090 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:09.090 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:09.348 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:09.348 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:09.348 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:09.348 [30/710] Linking static target lib/librte_telemetry.a 00:03:09.348 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:09.348 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:09.606 [33/710] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:09.606 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.606 [35/710] Linking target lib/librte_telemetry.so.24.0 00:03:09.606 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:09.606 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:09.864 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:09.864 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:09.864 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:09.864 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:09.864 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:09.864 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:09.864 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:09.864 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.122 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.122 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:10.380 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:10.380 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:10.380 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:10.380 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:10.639 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:10.639 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:10.639 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:10.639 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:10.897 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:10.897 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:10.897 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:10.897 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:10.897 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:10.897 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:10.897 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.156 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.156 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.156 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:11.156 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:11.156 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:11.156 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:11.434 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:11.692 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:11.692 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:11.692 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:11.692 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:11.692 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:11.692 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:11.692 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:11.692 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:11.692 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:11.952 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.210 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:12.210 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:12.211 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:12.211 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:12.211 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:12.469 [85/710] Linking static target lib/librte_ring.a 00:03:12.469 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:12.469 [87/710] Linking static target lib/librte_eal.a 00:03:12.469 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:12.469 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:12.470 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.728 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:12.728 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:12.728 [93/710] Linking static target lib/librte_mempool.a 00:03:12.728 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:12.728 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:12.987 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:12.987 [97/710] Linking static target lib/librte_rcu.a 00:03:13.246 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:13.246 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:13.246 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:13.246 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.246 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:13.505 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.505 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:13.505 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:13.505 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:13.764 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:13.764 [108/710] Linking static target lib/librte_mbuf.a 00:03:13.764 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:13.764 [110/710] Linking static target lib/librte_net.a 00:03:14.022 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:14.022 [112/710] Linking static target lib/librte_meter.a 00:03:14.022 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.023 [114/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:14.281 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:14.281 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:14.281 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.281 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:14.281 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.849 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:14.849 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:15.108 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:15.108 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:15.108 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:15.367 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:15.367 [126/710] Linking static target lib/librte_pci.a 00:03:15.367 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:15.367 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:15.367 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.367 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:15.627 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:15.627 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:15.627 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:15.627 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:15.627 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:15.627 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:15.627 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:15.627 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:15.627 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:15.627 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:15.886 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:15.886 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:16.144 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:16.144 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:16.144 [145/710] Linking static target lib/librte_cmdline.a 00:03:16.403 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:16.403 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:16.403 [148/710] Linking static target lib/librte_metrics.a 00:03:16.403 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:16.403 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:16.661 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.919 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.919 [153/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:16.919 [154/710] Linking static target lib/librte_timer.a 00:03:17.178 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:17.436 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.694 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:17.694 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:17.694 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:17.952 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:18.521 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.521 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:18.521 [163/710] Linking static target lib/librte_ethdev.a 00:03:18.521 [164/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:18.521 [165/710] Linking static target lib/librte_bitratestats.a 00:03:18.521 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:18.521 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.521 [168/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.779 [169/710] Linking target lib/librte_eal.so.24.0 00:03:18.779 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:18.779 [171/710] Linking static target lib/librte_bbdev.a 00:03:18.779 [172/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:18.779 [173/710] Linking target lib/librte_ring.so.24.0 00:03:18.779 [174/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:19.038 [175/710] Linking target lib/librte_meter.so.24.0 00:03:19.038 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:19.038 [177/710] Linking target lib/librte_rcu.so.24.0 00:03:19.038 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:19.038 [179/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:19.038 [180/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:19.038 [181/710] Linking static target lib/librte_hash.a 00:03:19.038 [182/710] Linking target lib/librte_mempool.so.24.0 00:03:19.038 [183/710] Linking target lib/librte_pci.so.24.0 00:03:19.038 [184/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:19.297 [185/710] Linking target lib/librte_timer.so.24.0 00:03:19.297 [186/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:19.297 [187/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:19.297 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:03:19.297 [189/710] Linking target lib/librte_mbuf.so.24.0 00:03:19.297 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:19.297 [191/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:19.297 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:19.297 [193/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:19.297 [194/710] Linking static target lib/acl/libavx512_tmp.a 00:03:19.297 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:19.297 [196/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:19.555 [197/710] Linking target lib/librte_net.so.24.0 00:03:19.555 [198/710] Linking target lib/librte_bbdev.so.24.0 00:03:19.555 [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:19.555 [200/710] Linking target lib/librte_cmdline.so.24.0 00:03:19.555 [201/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.555 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:19.555 [203/710] Linking target lib/librte_hash.so.24.0 00:03:19.555 [204/710] Linking static target lib/librte_acl.a 00:03:19.814 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:19.814 [206/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:19.814 [207/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:19.814 [208/710] Linking static target lib/librte_cfgfile.a 00:03:20.072 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.072 [210/710] Linking target lib/librte_acl.so.24.0 00:03:20.073 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:20.073 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:20.331 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:20.331 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.331 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:20.331 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:03:20.589 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.589 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:20.589 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.589 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:20.589 [221/710] Linking static target lib/librte_bpf.a 00:03:20.848 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.848 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:20.848 [224/710] Linking static target lib/librte_compressdev.a 00:03:20.848 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.106 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.106 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:21.106 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:21.365 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.365 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:21.365 [231/710] Linking static target lib/librte_distributor.a 00:03:21.365 [232/710] Linking target lib/librte_compressdev.so.24.0 00:03:21.365 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:21.623 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.623 [235/710] Linking target lib/librte_distributor.so.24.0 00:03:21.623 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.623 [237/710] Linking static target lib/librte_dmadev.a 
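The numbered [N/710] steps above are ninja working through the build plan that meson configured earlier. For reference, a sketch of how that configuration could be re-created by hand, using the `meson setup` form that the deprecation WARNING above asks for. Every option value below is copied from the "User defined options" summary printed in this log (c_link_args was empty and is omitted), so treat this as an illustration rather than the exact SPDK autobuild invocation:

cd /home/vagrant/spdk_repo/dpdk
# Configure; values taken verbatim from the "User defined options" block above.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
# Then build, as the autobuild_common.sh step in this log does:
ninja -C build-tmp -j10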
00:03:21.880 [238/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:22.138 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.138 [240/710] Linking target lib/librte_dmadev.so.24.0 00:03:22.138 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:22.138 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:22.396 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:22.654 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:22.654 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:22.654 [246/710] Linking static target lib/librte_efd.a 00:03:22.912 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:22.912 [248/710] Linking static target lib/librte_cryptodev.a 00:03:22.912 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:22.912 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.912 [251/710] Linking target lib/librte_efd.so.24.0 00:03:23.170 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.170 [253/710] Linking target lib/librte_ethdev.so.24.0 00:03:23.170 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:23.170 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:23.170 [256/710] Linking static target lib/librte_dispatcher.a 00:03:23.428 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:23.428 [258/710] Linking target lib/librte_metrics.so.24.0 00:03:23.428 [259/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:23.428 [260/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:23.687 [261/710] Linking target lib/librte_bitratestats.so.24.0 00:03:23.687 [262/710] Linking static target lib/librte_gpudev.a 00:03:23.687 [263/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:23.687 [264/710] Linking target lib/librte_bpf.so.24.0 00:03:23.687 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.687 [266/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:23.687 [267/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:23.687 [268/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:23.945 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:23.945 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:24.203 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.203 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:03:24.203 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:24.460 [274/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:24.460 [275/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:24.460 [276/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.460 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 
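As the "Linking static target lib/librte_X.a" and "Linking target lib/librte_X.so.24.0" pairs above show, each DPDK library is produced both as a static archive and as a versioned shared object. Once the build is installed into the prefix from the options summary, an application would normally locate it through pkg-config rather than hard-coding paths. A sketch, assuming an install step that this log has not reached yet:

# Assumes `ninja -C build-tmp install` has populated the prefix
# (/home/vagrant/spdk_repo/dpdk/build, libdir=lib) shown in the summary.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk        # expected to report 23.11.0
pkg-config --cflags --libs libdpdk     # compile/link flags for an application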
00:03:24.460 [278/710] Linking target lib/librte_gpudev.so.24.0 00:03:24.460 [279/710] Linking static target lib/librte_eventdev.a 00:03:24.460 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:24.460 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:24.460 [282/710] Linking static target lib/librte_gro.a 00:03:24.718 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:24.718 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:24.718 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:24.718 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.718 [287/710] Linking target lib/librte_gro.so.24.0 00:03:24.976 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:24.977 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:24.977 [290/710] Linking static target lib/librte_gso.a 00:03:25.235 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:25.235 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.235 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:25.235 [294/710] Linking target lib/librte_gso.so.24.0 00:03:25.235 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:25.536 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:25.536 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:25.536 [298/710] Linking static target lib/librte_jobstats.a 00:03:25.536 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:25.536 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:25.536 [301/710] Linking static target lib/librte_ip_frag.a 00:03:25.813 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:25.813 [303/710] Linking static target lib/librte_latencystats.a 00:03:25.813 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.813 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:25.813 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.813 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.813 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:03:26.073 [309/710] Linking target lib/librte_latencystats.so.24.0 00:03:26.073 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:26.073 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:26.073 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:26.073 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:26.073 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:26.073 [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:26.073 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:26.330 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:26.330 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.589 [319/710] 
Linking target lib/librte_eventdev.so.24.0 00:03:26.589 [320/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:26.589 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:26.589 [322/710] Linking static target lib/librte_lpm.a 00:03:26.589 [323/710] Linking target lib/librte_dispatcher.so.24.0 00:03:26.589 [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:26.589 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:26.847 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:26.847 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:26.847 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:26.847 [329/710] Linking static target lib/librte_pcapng.a 00:03:26.847 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.104 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:27.104 [332/710] Linking target lib/librte_lpm.so.24.0 00:03:27.104 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:27.104 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:27.104 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.104 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:27.362 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:27.362 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:27.362 [339/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:27.632 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:27.632 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:27.632 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:27.632 [343/710] Linking static target lib/librte_power.a 00:03:27.632 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:27.632 [345/710] Linking static target lib/librte_rawdev.a 00:03:27.632 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:27.632 [347/710] Linking static target lib/librte_member.a 00:03:27.902 [348/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:27.902 [349/710] Linking static target lib/librte_regexdev.a 00:03:27.902 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:27.902 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:27.902 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:28.160 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:28.160 [354/710] Linking static target lib/librte_mldev.a 00:03:28.160 [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.160 [356/710] Linking target lib/librte_member.so.24.0 00:03:28.160 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.419 [358/710] Linking target lib/librte_rawdev.so.24.0 00:03:28.419 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:28.419 [360/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:28.419 [361/710] 
Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.419 [362/710] Linking target lib/librte_power.so.24.0 00:03:28.678 [363/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:28.678 [364/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.678 [365/710] Linking target lib/librte_regexdev.so.24.0 00:03:28.678 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:28.678 [367/710] Linking static target lib/librte_reorder.a 00:03:28.678 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:28.936 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:28.936 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:28.936 [371/710] Linking static target lib/librte_rib.a 00:03:28.936 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:28.936 [373/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.936 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:29.195 [375/710] Linking target lib/librte_reorder.so.24.0 00:03:29.195 [376/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:29.195 [377/710] Linking static target lib/librte_stack.a 00:03:29.195 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:29.195 [379/710] Linking static target lib/librte_security.a 00:03:29.195 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:29.195 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.195 [382/710] Linking target lib/librte_rib.so.24.0 00:03:29.453 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.453 [384/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.453 [385/710] Linking target lib/librte_stack.so.24.0 00:03:29.453 [386/710] Linking target lib/librte_mldev.so.24.0 00:03:29.453 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:29.453 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:29.712 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.712 [390/710] Linking target lib/librte_security.so.24.0 00:03:29.712 [391/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:29.712 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:29.970 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:29.970 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:29.970 [395/710] Linking static target lib/librte_sched.a 00:03:30.229 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.229 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:30.229 [398/710] Linking target lib/librte_sched.so.24.0 00:03:30.487 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:30.487 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:30.746 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:30.746 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:31.004 [403/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:31.004 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:31.004 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:31.263 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:31.263 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:31.521 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:31.521 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:31.778 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:31.778 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:31.778 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:31.778 [413/710] Linking static target lib/librte_ipsec.a 00:03:32.035 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:32.035 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:32.035 [416/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:32.293 [417/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.293 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:32.293 [419/710] Linking target lib/librte_ipsec.so.24.0 00:03:32.293 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:32.293 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:32.293 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:32.293 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:32.859 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:33.117 [425/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:33.117 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:33.117 [427/710] Linking static target lib/librte_pdcp.a 00:03:33.117 [428/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:33.117 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:33.117 [430/710] Linking static target lib/librte_fib.a 00:03:33.118 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:33.118 [432/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:33.376 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.376 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.635 [435/710] Linking target lib/librte_pdcp.so.24.0 00:03:33.635 [436/710] Linking target lib/librte_fib.so.24.0 00:03:33.635 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:34.202 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:34.202 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:34.202 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:34.202 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:34.460 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:34.460 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:34.460 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:34.460 [445/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 
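The libdir24_8_avx512_tmp.a and libtrie_avx512_tmp.a steps just above follow the same pattern seen earlier for acl and member: the configure stage found `__AVX512F__` undefined under the baseline `machine: native` flags but confirmed the compiler accepts -mavx512f, so the AVX-512 code paths are compiled into small helper archives with the extra flags and selected at run time. A sketch of the two sides of that check; the grep is only a rough host-side stand-in for the CPU-flag query DPDK performs internally:

# Build-time probe, like the "Compiler for C supports arguments -mavx512f"
# checks in the configure output above:
echo 'int main(void){return 0;}' | cc -mavx512f -x c -o /dev/null - \
    && echo 'compiler can emit AVX-512F'

# Run-time check on this host (rough stand-in for DPDK's
# rte_cpu_get_flag_enabled() CPU-feature query):
grep -m1 -o 'avx512f' /proc/cpuinfo || echo 'no AVX-512F on this CPU'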
00:03:34.718 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:34.718 [447/710] Linking static target lib/librte_port.a 00:03:34.977 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:34.977 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:34.977 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:35.235 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:35.235 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.235 [453/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:35.235 [454/710] Linking target lib/librte_port.so.24.0 00:03:35.235 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:35.494 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:35.494 [457/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:35.494 [458/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:35.494 [459/710] Linking static target lib/librte_pdump.a 00:03:35.494 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.752 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:35.752 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:36.010 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:36.010 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:36.268 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:36.268 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:36.268 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:36.268 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:36.526 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:36.526 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:36.785 [471/710] Linking static target lib/librte_table.a 00:03:36.785 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:36.785 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:37.351 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.351 [475/710] Linking target lib/librte_table.so.24.0 00:03:37.351 [476/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:37.351 [477/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:37.351 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:37.616 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:37.616 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:38.212 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:38.212 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:38.212 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:38.212 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:38.212 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:38.490 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:38.748 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:39.006 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:39.006 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:39.006 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:39.006 [491/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:39.006 [492/710] Linking static target lib/librte_graph.a 00:03:39.264 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:39.522 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:39.522 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.780 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:39.780 [497/710] Linking target lib/librte_graph.so.24.0 00:03:39.780 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:39.780 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:40.038 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:40.296 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:40.296 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:40.296 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:40.296 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:40.296 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:40.554 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:40.554 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:40.812 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:41.069 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:41.069 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:41.069 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.069 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:41.069 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:41.069 [514/710] Linking static target lib/librte_node.a 00:03:41.327 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:41.327 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.585 [517/710] Linking target lib/librte_node.so.24.0 00:03:41.585 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:41.585 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:41.585 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:41.585 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:41.843 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:41.843 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.843 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:41.843 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:41.843 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:41.843 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:03:42.101 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.101 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.101 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.101 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:42.101 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:42.101 [533/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:42.101 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:42.359 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:42.359 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.359 [537/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.359 [538/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.359 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:42.359 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:42.618 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:42.618 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.618 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:42.618 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.618 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:42.618 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:43.184 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:43.184 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:43.184 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:43.442 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:43.442 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:44.398 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:44.398 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:44.398 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:44.398 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:44.398 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:44.398 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:44.963 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:44.963 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:45.220 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:45.220 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:45.220 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:45.784 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:45.784 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:45.784 [565/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:46.041 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:46.298 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:46.555 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:46.555 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:46.555 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:46.555 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:46.555 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:46.555 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:46.813 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.813 [575/710] Linking static target lib/librte_vhost.a 00:03:47.071 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:47.071 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:47.071 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:47.329 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:47.329 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:47.329 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:47.329 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:47.893 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:47.893 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:47.893 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:47.893 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:47.893 [587/710] Linking static target drivers/librte_net_i40e.a 00:03:47.893 [588/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:47.893 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:47.893 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:47.894 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:47.894 [592/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.151 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:48.151 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:48.410 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:48.410 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.668 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:48.668 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:48.668 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:48.927 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:49.185 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:49.185 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:49.185 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:49.185 [604/710] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:49.442 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:49.442 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:49.699 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:49.957 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:49.957 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:50.215 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:50.215 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:50.215 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:50.215 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:50.215 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:50.473 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:50.473 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:50.473 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:50.731 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:50.989 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:50.989 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:50.989 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:51.247 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:51.247 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:51.247 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:51.247 [625/710] Linking static target lib/librte_pipeline.a 00:03:52.180 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:52.180 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:52.180 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:52.180 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:52.180 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:52.437 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:52.437 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:52.437 [633/710] Linking target app/dpdk-dumpcap 00:03:52.437 [634/710] Linking target app/dpdk-graph 00:03:52.695 [635/710] Linking target app/dpdk-pdump 00:03:52.695 [636/710] Linking target app/dpdk-proc-info 00:03:52.695 [637/710] Linking target app/dpdk-test-acl 00:03:52.695 [638/710] Linking target app/dpdk-test-cmdline 00:03:52.953 [639/710] Linking target app/dpdk-test-compress-perf 00:03:52.953 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:52.953 [641/710] Linking target app/dpdk-test-dma-perf 00:03:52.953 [642/710] Linking target app/dpdk-test-fib 00:03:53.211 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:53.211 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:53.211 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:53.469 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:53.469 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:53.469 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:53.727 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:53.727 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:53.985 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:53.985 [652/710] Linking target app/dpdk-test-gpudev 00:03:53.985 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:53.985 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:53.985 [655/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.985 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:54.242 [657/710] Linking target app/dpdk-test-eventdev 00:03:54.242 [658/710] Linking target lib/librte_pipeline.so.24.0 00:03:54.242 [659/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:54.500 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:54.500 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:54.500 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:54.500 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:54.758 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:54.758 [665/710] Linking target app/dpdk-test-flow-perf 00:03:54.758 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:54.758 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:55.016 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:55.016 [669/710] Linking target app/dpdk-test-bbdev 00:03:55.274 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:55.274 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:55.274 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:55.274 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:55.532 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:55.532 [675/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:55.790 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:55.790 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:56.049 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:56.049 [679/710] Linking target app/dpdk-test-mldev 00:03:56.049 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:56.307 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:56.307 [682/710] Linking target app/dpdk-test-pipeline 00:03:56.565 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:56.824 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:56.824 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:56.824 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:56.824 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:57.082 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:57.341 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:57.341 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:57.599 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:57.599 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:57.599 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:57.857 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:58.116 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:58.116 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:58.374 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:58.374 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:58.633 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:58.633 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:58.633 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:58.892 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:58.892 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:58.892 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:58.892 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:59.151 [706/710] Linking target app/dpdk-test-regex 00:03:59.151 [707/710] Linking target app/dpdk-test-sad 00:03:59.410 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:59.410 [709/710] Linking target app/dpdk-testpmd 00:03:59.669 [710/710] Linking target app/dpdk-test-security-perf 00:03:59.669 17:24:41 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:59.669 17:24:41 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:59.669 17:24:41 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:59.928 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:59.928 [0/1] Installing files. 
00:04:00.192 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.192 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.193 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:00.194 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:00.194 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.195 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:00.196 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:00.196 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.196 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:00.197 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
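Each DPDK library in the log above lands in the prefix twice: as a static archive (.a) and as a versioned shared object (.so.24.0) under /home/vagrant/spdk_repo/dpdk/build/lib, with the matching headers installed into build/include later in this run. A minimal sketch of a program consuming this install, assuming the usual meson-generated libdpdk.pc ends up under build/lib/pkgconfig (the .pc install step is not shown in this log):

    /* hello_eal.c - minimal sketch against the install recorded above.
     * The compile command below is an assumption, not taken from this log. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL arguments; a negative return
         * value indicates initialization failed. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("EAL up, %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }

Under the same assumption about the .pc location, this would build with something like: cc hello_eal.c $(PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig pkg-config --cflags --libs libdpdk).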
00:04:00.197 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.197 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:00.769 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:00.769 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:00.769 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:00.769 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:00.769 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.769 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.770 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.771 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.772 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:00.773 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:00.773 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:00.773 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:00.773 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:00.773 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:00.773 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:00.773 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:00.773 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:00.773 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:00.773 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:00.773 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:00.773 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:00.773 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:00.773 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:00.773 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:00.773 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:00.773 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:00.773 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:00.773 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:00.773 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:00.773 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:00.773 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:00.773 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:00.773 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:00.773 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:00.773 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:00.773 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:00.773 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:00.773 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:00.773 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:00.773 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:00.773 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:00.773 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:00.773 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:00.773 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:00.773 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:00.773 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:00.773 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:00.773 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:00.773 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:00.773 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:00.773 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:00.773 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:00.773 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:00.774 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:00.774 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:00.774 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:00.774 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:00.774 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:00.774 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:00.774 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:00.774 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:00.774 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:00.774 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:00.774 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:00.774 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:00.774 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:00.774 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:00.774 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:00.774 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:00.774 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:00.774 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:00.774 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:00.774 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:00.774 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:00.774 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:00.774 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:00.774 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:00.774 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:00.774 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:00.774 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:00.774 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:00.774 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:00.774 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:00.774 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:00.774 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:00.774 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:00.774 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:00.774 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:00.774 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:00.774 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:00.774 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:00.774 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:00.774 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:00.774 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:00.774 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:00.774 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:00.774 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:00.774 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:00.774 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:00.774 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:00.774 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:00.774 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:00.774 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:00.774 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:00.774 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:00.774 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:00.774 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:00.774 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:00.774 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:00.774 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:00.774 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:00.774 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:00.774 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:00.774 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:00.774 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:00.774 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:00.774 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:00.774 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:00.774 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:00.774 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:00.774 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:00.774 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:00.774 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:00.774 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:00.774 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:00.774 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:00.774 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:00.774 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:00.774 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:00.774 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:00.774 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:00.774 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:00.774 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:00.774 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:00.774 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:00.774 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:00.774 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:00.774 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:00.774 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:00.774 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:00.775 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:04:00.775 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:04:00.775 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:04:00.775 17:24:42 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:04:00.775 17:24:42 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:00.775
00:04:00.775 real 1m0.355s
00:04:00.775 user 7m16.587s
00:04:00.775 sys 1m9.922s
00:04:00.775 17:24:42 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:00.775 ************************************
00:04:00.775 END TEST build_native_dpdk
00:04:00.775 ************************************
00:04:00.775 17:24:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:00.775 17:24:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:00.775 17:24:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:00.775 17:24:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared
00:04:01.033 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:04:01.033 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:04:01.033 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:04:01.291 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:01.551 Using 'verbs' RDMA provider
00:04:14.733 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:29.665 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:29.665 go version go1.21.1 linux/amd64
00:04:29.665 Creating mk/config.mk...done.
00:04:29.665 Creating mk/cc.flags.mk...done.
00:04:29.665 Type 'make' to build.
00:04:29.665 17:25:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:29.665 17:25:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:29.665 17:25:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:29.665 17:25:09 -- common/autotest_common.sh@10 -- $ set +x
00:04:29.665 ************************************
00:04:29.665 START TEST make
00:04:29.665 ************************************
00:04:29.665 17:25:09 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:29.665 make[1]: Nothing to be done for 'all'.
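The configure invocation above pins SPDK to the freshly staged DPDK via --with-dpdk, and the "Using .../pkgconfig for additional libs" line shows it resolving that build through the libdpdk.pc file installed a few records earlier. A minimal sketch of reproducing this stage by hand, assuming the same /home/vagrant/spdk_repo layout; the meson/ninja flow, the configure flags, and the -j10 job count are taken from this log, while trimming the configure line to its essential flags is an assumption:

# Build and stage DPDK into its build/ prefix (build-tmp matches the
# build directory visible in the install records above).
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp --prefix="$PWD/build" --libdir=lib
ninja -C build-tmp install
# The install step also lays out the ldconfig-style symlink chains seen
# above, e.g. librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0.

# Sanity-check that pkg-config resolves the staged DPDK.
PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
    pkg-config --modversion libdpdk

# Configure SPDK against that DPDK and build, as the autotest does.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
make -j10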
00:04:29.665 The Meson build system
00:04:29.665 Version: 1.5.0
00:04:29.665 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:04:29.665 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:04:29.665 Build type: native build
00:04:29.665 Project name: libvfio-user
00:04:29.665 Project version: 0.0.1
00:04:29.665 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:29.665 C linker for the host machine: gcc ld.bfd 2.40-14
00:04:29.665 Host machine cpu family: x86_64
00:04:29.665 Host machine cpu: x86_64
00:04:29.665 Run-time dependency threads found: YES
00:04:29.665 Library dl found: YES
00:04:29.665 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:29.665 Run-time dependency json-c found: YES 0.17
00:04:29.665 Run-time dependency cmocka found: YES 1.1.7
00:04:29.665 Program pytest-3 found: NO
00:04:29.665 Program flake8 found: NO
00:04:29.665 Program misspell-fixer found: NO
00:04:29.665 Program restructuredtext-lint found: NO
00:04:29.665 Program valgrind found: YES (/usr/bin/valgrind)
00:04:29.665 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:29.665 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:29.665 Compiler for C supports arguments -Wwrite-strings: YES
00:04:29.665 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:29.665 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:04:29.665 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:04:29.665 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
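The libvfio-user submodule is configured with stock meson. Judging from the source/build directories above and the "User defined options" summary printed just below, the standalone equivalent would look roughly like this sketch (paths are copied from the log; the exact argument spelling and ordering are assumptions):

cd /home/vagrant/spdk_repo/spdk
# Configure: buildtype/default_library/libdir mirror the options summary.
meson setup build/libvfio-user/build-debug libvfio-user \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
# Compile the 8 project targets, then stage them under DESTDIR exactly
# as the "meson install --quiet" record further down does.
ninja -C build/libvfio-user/build-debug
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
    meson install --quiet -C build/libvfio-user/build-debug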
00:04:29.665 Build targets in project: 8 00:04:29.665 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:29.665 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:29.665 00:04:29.665 libvfio-user 0.0.1 00:04:29.665 00:04:29.665 User defined options 00:04:29.665 buildtype : debug 00:04:29.665 default_library: shared 00:04:29.665 libdir : /usr/local/lib 00:04:29.665 00:04:29.665 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:29.924 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:30.182 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:30.182 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:30.182 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:30.182 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:30.182 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:30.182 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:30.182 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:30.182 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:30.182 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:30.182 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:30.182 [11/37] Compiling C object samples/server.p/server.c.o 00:04:30.182 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:30.182 [13/37] Compiling C object samples/null.p/null.c.o 00:04:30.182 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:30.182 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:30.441 [16/37] Compiling C object samples/client.p/client.c.o 00:04:30.441 [17/37] Linking target samples/client 00:04:30.441 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:30.441 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:30.441 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:30.441 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:30.441 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:30.441 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:30.441 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:30.441 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:30.441 [26/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:30.441 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:30.441 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:04:30.441 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:30.700 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:30.700 [31/37] Linking target test/unit_tests 00:04:30.700 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:30.700 [33/37] Linking target samples/gpio-pci-idio-16 00:04:30.700 [34/37] Linking target samples/null 00:04:30.700 [35/37] Linking target samples/lspci 00:04:30.700 [36/37] Linking target samples/server 00:04:30.700 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:30.700 INFO: autodetecting backend as ninja 00:04:30.700 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:30.700 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:31.266 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:31.266 ninja: no work to do. 00:05:17.943 CC lib/ut/ut.o 00:05:17.943 CC lib/ut_mock/mock.o 00:05:17.943 CC lib/log/log.o 00:05:17.943 CC lib/log/log_flags.o 00:05:17.943 CC lib/log/log_deprecated.o 00:05:17.943 LIB libspdk_ut_mock.a 00:05:17.943 SO libspdk_ut_mock.so.6.0 00:05:17.943 LIB libspdk_ut.a 00:05:17.943 LIB libspdk_log.a 00:05:17.943 SO libspdk_ut.so.2.0 00:05:17.943 SYMLINK libspdk_ut_mock.so 00:05:17.943 SO libspdk_log.so.7.0 00:05:17.943 SYMLINK libspdk_ut.so 00:05:17.943 SYMLINK libspdk_log.so 00:05:17.943 CC lib/dma/dma.o 00:05:17.943 CXX lib/trace_parser/trace.o 00:05:17.943 CC lib/util/base64.o 00:05:17.943 CC lib/util/bit_array.o 00:05:17.943 CC lib/util/cpuset.o 00:05:17.943 CC lib/util/crc16.o 00:05:17.943 CC lib/util/crc32.o 00:05:17.943 CC lib/util/crc32c.o 00:05:17.943 CC lib/ioat/ioat.o 00:05:17.943 CC lib/vfio_user/host/vfio_user_pci.o 00:05:17.943 CC lib/vfio_user/host/vfio_user.o 00:05:17.943 CC lib/util/crc32_ieee.o 00:05:17.943 CC lib/util/crc64.o 00:05:17.943 CC lib/util/dif.o 00:05:17.943 LIB libspdk_dma.a 00:05:17.943 CC lib/util/fd.o 00:05:17.943 CC lib/util/fd_group.o 00:05:17.943 SO libspdk_dma.so.5.0 00:05:17.943 LIB libspdk_ioat.a 00:05:17.943 SYMLINK libspdk_dma.so 00:05:17.943 CC lib/util/file.o 00:05:17.943 CC lib/util/hexlify.o 00:05:17.943 CC lib/util/iov.o 00:05:17.943 SO libspdk_ioat.so.7.0 00:05:17.943 CC lib/util/math.o 00:05:17.943 CC lib/util/net.o 00:05:17.943 LIB libspdk_vfio_user.a 00:05:17.943 SYMLINK libspdk_ioat.so 00:05:17.943 CC lib/util/pipe.o 00:05:17.943 SO libspdk_vfio_user.so.5.0 00:05:17.943 CC lib/util/strerror_tls.o 00:05:17.943 SYMLINK libspdk_vfio_user.so 00:05:17.943 CC lib/util/string.o 00:05:17.943 CC lib/util/uuid.o 00:05:17.943 CC lib/util/xor.o 00:05:17.943 CC lib/util/zipf.o 00:05:17.943 CC lib/util/md5.o 00:05:17.943 LIB libspdk_util.a 00:05:17.943 SO libspdk_util.so.10.0 00:05:17.943 SYMLINK libspdk_util.so 00:05:17.943 LIB libspdk_trace_parser.a 00:05:17.943 SO libspdk_trace_parser.so.6.0 00:05:17.943 SYMLINK libspdk_trace_parser.so 00:05:17.943 CC lib/rdma_provider/common.o 00:05:17.943 CC lib/env_dpdk/env.o 00:05:17.943 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:17.943 CC lib/env_dpdk/memory.o 00:05:17.943 CC lib/json/json_parse.o 00:05:17.943 CC lib/json/json_util.o 00:05:17.943 CC lib/idxd/idxd.o 00:05:17.943 CC lib/vmd/vmd.o 00:05:17.943 CC lib/conf/conf.o 00:05:17.943 CC lib/rdma_utils/rdma_utils.o 00:05:17.943 CC lib/vmd/led.o 00:05:17.943 LIB libspdk_rdma_provider.a 00:05:17.943 LIB libspdk_conf.a 00:05:17.943 CC lib/env_dpdk/pci.o 00:05:17.943 SO libspdk_rdma_provider.so.6.0 00:05:17.943 CC lib/json/json_write.o 00:05:17.943 SO libspdk_conf.so.6.0 00:05:17.943 LIB libspdk_rdma_utils.a 00:05:17.943 SYMLINK libspdk_rdma_provider.so 00:05:17.943 CC lib/env_dpdk/init.o 00:05:17.943 SYMLINK libspdk_conf.so 00:05:17.943 CC lib/env_dpdk/threads.o 00:05:17.943 SO libspdk_rdma_utils.so.1.0 00:05:17.943 CC lib/env_dpdk/pci_ioat.o 00:05:17.943 SYMLINK libspdk_rdma_utils.so 00:05:17.943 CC lib/env_dpdk/pci_virtio.o 00:05:17.943 CC lib/env_dpdk/pci_vmd.o 00:05:17.943 CC lib/env_dpdk/pci_idxd.o 00:05:17.943 CC lib/env_dpdk/pci_event.o 00:05:17.943 LIB libspdk_json.a 00:05:17.943 CC lib/idxd/idxd_user.o 00:05:17.943 CC lib/env_dpdk/sigbus_handler.o 00:05:17.943 SO 
libspdk_json.so.6.0 00:05:17.943 CC lib/idxd/idxd_kernel.o 00:05:17.943 LIB libspdk_vmd.a 00:05:17.943 SYMLINK libspdk_json.so 00:05:17.943 CC lib/env_dpdk/pci_dpdk.o 00:05:17.943 SO libspdk_vmd.so.6.0 00:05:17.943 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:17.943 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:17.943 SYMLINK libspdk_vmd.so 00:05:17.943 LIB libspdk_idxd.a 00:05:17.943 CC lib/jsonrpc/jsonrpc_server.o 00:05:17.943 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:17.943 CC lib/jsonrpc/jsonrpc_client.o 00:05:17.943 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:17.943 SO libspdk_idxd.so.12.1 00:05:17.943 SYMLINK libspdk_idxd.so 00:05:17.943 LIB libspdk_jsonrpc.a 00:05:17.943 SO libspdk_jsonrpc.so.6.0 00:05:17.943 SYMLINK libspdk_jsonrpc.so 00:05:17.943 LIB libspdk_env_dpdk.a 00:05:17.943 SO libspdk_env_dpdk.so.15.0 00:05:17.943 CC lib/rpc/rpc.o 00:05:17.943 SYMLINK libspdk_env_dpdk.so 00:05:17.943 LIB libspdk_rpc.a 00:05:17.943 SO libspdk_rpc.so.6.0 00:05:17.943 SYMLINK libspdk_rpc.so 00:05:17.943 CC lib/trace/trace.o 00:05:17.943 CC lib/trace/trace_flags.o 00:05:17.943 CC lib/trace/trace_rpc.o 00:05:17.944 CC lib/keyring/keyring.o 00:05:17.944 CC lib/keyring/keyring_rpc.o 00:05:17.944 CC lib/notify/notify.o 00:05:17.944 CC lib/notify/notify_rpc.o 00:05:18.203 LIB libspdk_notify.a 00:05:18.203 SO libspdk_notify.so.6.0 00:05:18.203 LIB libspdk_keyring.a 00:05:18.461 SYMLINK libspdk_notify.so 00:05:18.461 LIB libspdk_trace.a 00:05:18.461 SO libspdk_keyring.so.2.0 00:05:18.461 SO libspdk_trace.so.11.0 00:05:18.461 SYMLINK libspdk_keyring.so 00:05:18.461 SYMLINK libspdk_trace.so 00:05:18.720 CC lib/thread/thread.o 00:05:18.720 CC lib/thread/iobuf.o 00:05:18.720 CC lib/sock/sock.o 00:05:18.720 CC lib/sock/sock_rpc.o 00:05:19.288 LIB libspdk_sock.a 00:05:19.288 SO libspdk_sock.so.10.0 00:05:19.288 SYMLINK libspdk_sock.so 00:05:19.546 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:19.547 CC lib/nvme/nvme_fabric.o 00:05:19.547 CC lib/nvme/nvme_ns_cmd.o 00:05:19.547 CC lib/nvme/nvme_ctrlr.o 00:05:19.547 CC lib/nvme/nvme_ns.o 00:05:19.547 CC lib/nvme/nvme_pcie_common.o 00:05:19.547 CC lib/nvme/nvme_pcie.o 00:05:19.547 CC lib/nvme/nvme.o 00:05:19.547 CC lib/nvme/nvme_qpair.o 00:05:20.114 LIB libspdk_thread.a 00:05:20.114 SO libspdk_thread.so.10.1 00:05:20.373 SYMLINK libspdk_thread.so 00:05:20.373 CC lib/nvme/nvme_quirks.o 00:05:20.373 CC lib/nvme/nvme_transport.o 00:05:20.373 CC lib/nvme/nvme_discovery.o 00:05:20.373 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:20.632 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:20.632 CC lib/nvme/nvme_tcp.o 00:05:20.632 CC lib/nvme/nvme_opal.o 00:05:20.892 CC lib/nvme/nvme_io_msg.o 00:05:20.892 CC lib/accel/accel.o 00:05:20.892 CC lib/nvme/nvme_poll_group.o 00:05:21.150 CC lib/accel/accel_rpc.o 00:05:21.150 CC lib/nvme/nvme_zns.o 00:05:21.150 CC lib/blob/blobstore.o 00:05:21.150 CC lib/init/json_config.o 00:05:21.409 CC lib/virtio/virtio.o 00:05:21.409 CC lib/virtio/virtio_vhost_user.o 00:05:21.409 CC lib/init/subsystem.o 00:05:21.409 CC lib/init/subsystem_rpc.o 00:05:21.409 CC lib/init/rpc.o 00:05:21.409 CC lib/virtio/virtio_vfio_user.o 00:05:21.668 CC lib/virtio/virtio_pci.o 00:05:21.668 CC lib/accel/accel_sw.o 00:05:21.668 CC lib/blob/request.o 00:05:21.668 LIB libspdk_init.a 00:05:21.668 SO libspdk_init.so.6.0 00:05:21.668 CC lib/blob/zeroes.o 00:05:21.668 SYMLINK libspdk_init.so 00:05:21.668 CC lib/nvme/nvme_stubs.o 00:05:21.927 CC lib/nvme/nvme_auth.o 00:05:21.927 LIB libspdk_accel.a 00:05:21.927 LIB libspdk_virtio.a 00:05:21.927 CC lib/blob/blob_bs_dev.o 00:05:21.927 CC 
lib/vfu_tgt/tgt_endpoint.o 00:05:21.927 CC lib/nvme/nvme_cuse.o 00:05:21.927 SO libspdk_accel.so.16.0 00:05:21.927 SO libspdk_virtio.so.7.0 00:05:21.927 SYMLINK libspdk_virtio.so 00:05:21.927 CC lib/vfu_tgt/tgt_rpc.o 00:05:21.927 SYMLINK libspdk_accel.so 00:05:21.927 CC lib/fsdev/fsdev.o 00:05:22.186 CC lib/event/app.o 00:05:22.186 CC lib/event/reactor.o 00:05:22.186 CC lib/event/log_rpc.o 00:05:22.186 CC lib/bdev/bdev.o 00:05:22.186 LIB libspdk_vfu_tgt.a 00:05:22.186 SO libspdk_vfu_tgt.so.3.0 00:05:22.445 SYMLINK libspdk_vfu_tgt.so 00:05:22.445 CC lib/bdev/bdev_rpc.o 00:05:22.445 CC lib/bdev/bdev_zone.o 00:05:22.445 CC lib/event/app_rpc.o 00:05:22.445 CC lib/nvme/nvme_vfio_user.o 00:05:22.445 CC lib/fsdev/fsdev_io.o 00:05:22.445 CC lib/fsdev/fsdev_rpc.o 00:05:22.706 CC lib/event/scheduler_static.o 00:05:22.706 CC lib/nvme/nvme_rdma.o 00:05:22.706 CC lib/bdev/part.o 00:05:22.706 CC lib/bdev/scsi_nvme.o 00:05:22.706 LIB libspdk_event.a 00:05:22.706 SO libspdk_event.so.14.0 00:05:22.965 SYMLINK libspdk_event.so 00:05:22.965 LIB libspdk_fsdev.a 00:05:22.965 SO libspdk_fsdev.so.1.0 00:05:22.965 SYMLINK libspdk_fsdev.so 00:05:23.224 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:23.792 LIB libspdk_fuse_dispatcher.a 00:05:23.792 SO libspdk_fuse_dispatcher.so.1.0 00:05:23.792 LIB libspdk_nvme.a 00:05:23.792 LIB libspdk_blob.a 00:05:23.792 SYMLINK libspdk_fuse_dispatcher.so 00:05:24.050 SO libspdk_blob.so.11.0 00:05:24.050 SYMLINK libspdk_blob.so 00:05:24.050 SO libspdk_nvme.so.14.0 00:05:24.309 CC lib/blobfs/tree.o 00:05:24.309 CC lib/blobfs/blobfs.o 00:05:24.309 CC lib/lvol/lvol.o 00:05:24.309 SYMLINK libspdk_nvme.so 00:05:24.568 LIB libspdk_bdev.a 00:05:24.568 SO libspdk_bdev.so.16.0 00:05:24.568 SYMLINK libspdk_bdev.so 00:05:24.826 CC lib/nbd/nbd.o 00:05:24.826 CC lib/nbd/nbd_rpc.o 00:05:24.826 CC lib/scsi/dev.o 00:05:24.826 CC lib/scsi/lun.o 00:05:24.826 CC lib/scsi/port.o 00:05:24.826 CC lib/ublk/ublk.o 00:05:24.826 CC lib/ftl/ftl_core.o 00:05:24.826 CC lib/nvmf/ctrlr.o 00:05:24.826 LIB libspdk_blobfs.a 00:05:25.084 SO libspdk_blobfs.so.10.0 00:05:25.085 CC lib/scsi/scsi.o 00:05:25.085 LIB libspdk_lvol.a 00:05:25.085 CC lib/scsi/scsi_bdev.o 00:05:25.085 SYMLINK libspdk_blobfs.so 00:05:25.085 CC lib/scsi/scsi_pr.o 00:05:25.085 SO libspdk_lvol.so.10.0 00:05:25.085 CC lib/nvmf/ctrlr_discovery.o 00:05:25.085 SYMLINK libspdk_lvol.so 00:05:25.085 CC lib/nvmf/ctrlr_bdev.o 00:05:25.085 CC lib/nvmf/subsystem.o 00:05:25.343 CC lib/nvmf/nvmf.o 00:05:25.343 LIB libspdk_nbd.a 00:05:25.343 CC lib/ftl/ftl_init.o 00:05:25.343 SO libspdk_nbd.so.7.0 00:05:25.343 CC lib/ftl/ftl_layout.o 00:05:25.343 SYMLINK libspdk_nbd.so 00:05:25.343 CC lib/ftl/ftl_debug.o 00:05:25.601 CC lib/scsi/scsi_rpc.o 00:05:25.601 CC lib/nvmf/nvmf_rpc.o 00:05:25.601 CC lib/nvmf/transport.o 00:05:25.601 CC lib/nvmf/tcp.o 00:05:25.601 CC lib/scsi/task.o 00:05:25.601 CC lib/ublk/ublk_rpc.o 00:05:25.601 CC lib/ftl/ftl_io.o 00:05:25.860 CC lib/nvmf/stubs.o 00:05:25.860 LIB libspdk_ublk.a 00:05:25.860 LIB libspdk_scsi.a 00:05:25.860 SO libspdk_ublk.so.3.0 00:05:25.860 SO libspdk_scsi.so.9.0 00:05:25.860 SYMLINK libspdk_ublk.so 00:05:25.860 CC lib/ftl/ftl_sb.o 00:05:25.860 CC lib/ftl/ftl_l2p.o 00:05:25.860 SYMLINK libspdk_scsi.so 00:05:26.118 CC lib/nvmf/mdns_server.o 00:05:26.118 CC lib/ftl/ftl_l2p_flat.o 00:05:26.118 CC lib/ftl/ftl_nv_cache.o 00:05:26.118 CC lib/nvmf/vfio_user.o 00:05:26.118 CC lib/nvmf/rdma.o 00:05:26.377 CC lib/nvmf/auth.o 00:05:26.377 CC lib/iscsi/conn.o 00:05:26.377 CC lib/vhost/vhost.o 00:05:26.377 CC 
lib/iscsi/init_grp.o 00:05:26.377 CC lib/iscsi/iscsi.o 00:05:26.377 CC lib/iscsi/param.o 00:05:26.636 CC lib/ftl/ftl_band.o 00:05:26.636 CC lib/iscsi/portal_grp.o 00:05:26.895 CC lib/vhost/vhost_rpc.o 00:05:26.895 CC lib/vhost/vhost_scsi.o 00:05:26.895 CC lib/iscsi/tgt_node.o 00:05:26.895 CC lib/vhost/vhost_blk.o 00:05:26.895 CC lib/vhost/rte_vhost_user.o 00:05:26.895 CC lib/ftl/ftl_band_ops.o 00:05:27.154 CC lib/ftl/ftl_writer.o 00:05:27.154 CC lib/ftl/ftl_rq.o 00:05:27.412 CC lib/iscsi/iscsi_subsystem.o 00:05:27.412 CC lib/ftl/ftl_reloc.o 00:05:27.412 CC lib/iscsi/iscsi_rpc.o 00:05:27.412 CC lib/iscsi/task.o 00:05:27.670 CC lib/ftl/ftl_l2p_cache.o 00:05:27.670 CC lib/ftl/ftl_p2l.o 00:05:27.670 CC lib/ftl/ftl_p2l_log.o 00:05:27.670 CC lib/ftl/mngt/ftl_mngt.o 00:05:27.670 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:27.670 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:27.928 LIB libspdk_iscsi.a 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:27.928 SO libspdk_iscsi.so.8.0 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:27.928 LIB libspdk_vhost.a 00:05:27.928 SYMLINK libspdk_iscsi.so 00:05:27.928 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:28.188 SO libspdk_vhost.so.8.0 00:05:28.188 LIB libspdk_nvmf.a 00:05:28.188 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:28.188 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:28.188 SYMLINK libspdk_vhost.so 00:05:28.188 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:28.188 CC lib/ftl/utils/ftl_conf.o 00:05:28.188 CC lib/ftl/utils/ftl_md.o 00:05:28.188 SO libspdk_nvmf.so.19.0 00:05:28.188 CC lib/ftl/utils/ftl_mempool.o 00:05:28.188 CC lib/ftl/utils/ftl_bitmap.o 00:05:28.188 CC lib/ftl/utils/ftl_property.o 00:05:28.188 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:28.446 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:28.446 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:28.446 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:28.446 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:28.446 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:28.446 SYMLINK libspdk_nvmf.so 00:05:28.446 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:28.446 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:28.705 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:28.705 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:28.705 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:28.705 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:28.705 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:28.705 CC lib/ftl/base/ftl_base_dev.o 00:05:28.705 CC lib/ftl/base/ftl_base_bdev.o 00:05:28.705 CC lib/ftl/ftl_trace.o 00:05:28.964 LIB libspdk_ftl.a 00:05:29.223 SO libspdk_ftl.so.9.0 00:05:29.483 SYMLINK libspdk_ftl.so 00:05:29.742 CC module/env_dpdk/env_dpdk_rpc.o 00:05:29.742 CC module/vfu_device/vfu_virtio.o 00:05:29.742 CC module/scheduler/gscheduler/gscheduler.o 00:05:29.742 CC module/sock/posix/posix.o 00:05:29.742 CC module/blob/bdev/blob_bdev.o 00:05:29.742 CC module/fsdev/aio/fsdev_aio.o 00:05:29.742 CC module/keyring/file/keyring.o 00:05:29.742 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:29.742 CC module/accel/error/accel_error.o 00:05:29.742 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:29.742 LIB libspdk_env_dpdk_rpc.a 00:05:29.742 SO libspdk_env_dpdk_rpc.so.6.0 00:05:30.001 LIB libspdk_scheduler_gscheduler.a 00:05:30.001 SYMLINK libspdk_env_dpdk_rpc.so 00:05:30.001 CC module/keyring/file/keyring_rpc.o 00:05:30.001 LIB libspdk_scheduler_dpdk_governor.a 00:05:30.002 SO 
libspdk_scheduler_gscheduler.so.4.0 00:05:30.002 CC module/accel/error/accel_error_rpc.o 00:05:30.002 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:30.002 SYMLINK libspdk_scheduler_gscheduler.so 00:05:30.002 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:30.002 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:30.002 CC module/vfu_device/vfu_virtio_blk.o 00:05:30.002 LIB libspdk_scheduler_dynamic.a 00:05:30.002 LIB libspdk_blob_bdev.a 00:05:30.002 LIB libspdk_keyring_file.a 00:05:30.002 SO libspdk_blob_bdev.so.11.0 00:05:30.002 SO libspdk_scheduler_dynamic.so.4.0 00:05:30.261 LIB libspdk_accel_error.a 00:05:30.261 SO libspdk_keyring_file.so.2.0 00:05:30.261 SO libspdk_accel_error.so.2.0 00:05:30.261 CC module/keyring/linux/keyring.o 00:05:30.261 SYMLINK libspdk_blob_bdev.so 00:05:30.261 SYMLINK libspdk_scheduler_dynamic.so 00:05:30.261 CC module/fsdev/aio/linux_aio_mgr.o 00:05:30.261 SYMLINK libspdk_keyring_file.so 00:05:30.261 SYMLINK libspdk_accel_error.so 00:05:30.261 CC module/vfu_device/vfu_virtio_scsi.o 00:05:30.261 CC module/vfu_device/vfu_virtio_rpc.o 00:05:30.261 CC module/keyring/linux/keyring_rpc.o 00:05:30.261 CC module/vfu_device/vfu_virtio_fs.o 00:05:30.261 CC module/accel/ioat/accel_ioat.o 00:05:30.261 CC module/accel/iaa/accel_iaa.o 00:05:30.261 CC module/accel/ioat/accel_ioat_rpc.o 00:05:30.261 CC module/accel/dsa/accel_dsa.o 00:05:30.261 LIB libspdk_fsdev_aio.a 00:05:30.520 LIB libspdk_sock_posix.a 00:05:30.520 SO libspdk_fsdev_aio.so.1.0 00:05:30.520 SO libspdk_sock_posix.so.6.0 00:05:30.520 LIB libspdk_keyring_linux.a 00:05:30.520 SO libspdk_keyring_linux.so.1.0 00:05:30.520 CC module/accel/iaa/accel_iaa_rpc.o 00:05:30.520 SYMLINK libspdk_fsdev_aio.so 00:05:30.520 CC module/accel/dsa/accel_dsa_rpc.o 00:05:30.520 LIB libspdk_accel_ioat.a 00:05:30.520 SYMLINK libspdk_keyring_linux.so 00:05:30.520 SYMLINK libspdk_sock_posix.so 00:05:30.520 SO libspdk_accel_ioat.so.6.0 00:05:30.520 LIB libspdk_vfu_device.a 00:05:30.520 SYMLINK libspdk_accel_ioat.so 00:05:30.520 SO libspdk_vfu_device.so.3.0 00:05:30.779 LIB libspdk_accel_dsa.a 00:05:30.779 LIB libspdk_accel_iaa.a 00:05:30.779 SO libspdk_accel_dsa.so.5.0 00:05:30.779 SO libspdk_accel_iaa.so.3.0 00:05:30.779 SYMLINK libspdk_vfu_device.so 00:05:30.779 SYMLINK libspdk_accel_dsa.so 00:05:30.779 SYMLINK libspdk_accel_iaa.so 00:05:30.779 CC module/bdev/delay/vbdev_delay.o 00:05:30.779 CC module/bdev/gpt/gpt.o 00:05:30.779 CC module/bdev/lvol/vbdev_lvol.o 00:05:30.779 CC module/bdev/error/vbdev_error.o 00:05:30.779 CC module/bdev/malloc/bdev_malloc.o 00:05:30.779 CC module/blobfs/bdev/blobfs_bdev.o 00:05:30.779 CC module/bdev/null/bdev_null.o 00:05:30.779 CC module/bdev/nvme/bdev_nvme.o 00:05:31.037 CC module/bdev/passthru/vbdev_passthru.o 00:05:31.037 CC module/bdev/raid/bdev_raid.o 00:05:31.037 CC module/bdev/gpt/vbdev_gpt.o 00:05:31.037 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:31.037 CC module/bdev/error/vbdev_error_rpc.o 00:05:31.037 CC module/bdev/null/bdev_null_rpc.o 00:05:31.037 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:31.037 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:31.296 LIB libspdk_blobfs_bdev.a 00:05:31.296 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:31.296 SO libspdk_blobfs_bdev.so.6.0 00:05:31.296 LIB libspdk_bdev_gpt.a 00:05:31.296 LIB libspdk_bdev_null.a 00:05:31.296 LIB libspdk_bdev_error.a 00:05:31.296 SO libspdk_bdev_gpt.so.6.0 00:05:31.296 SO libspdk_bdev_error.so.6.0 00:05:31.296 SO libspdk_bdev_null.so.6.0 00:05:31.296 SYMLINK libspdk_blobfs_bdev.so 00:05:31.296 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:05:31.296 CC module/bdev/raid/bdev_raid_rpc.o 00:05:31.296 LIB libspdk_bdev_delay.a 00:05:31.297 LIB libspdk_bdev_malloc.a 00:05:31.297 SYMLINK libspdk_bdev_error.so 00:05:31.297 SO libspdk_bdev_delay.so.6.0 00:05:31.297 SYMLINK libspdk_bdev_gpt.so 00:05:31.297 SYMLINK libspdk_bdev_null.so 00:05:31.297 SO libspdk_bdev_malloc.so.6.0 00:05:31.297 LIB libspdk_bdev_passthru.a 00:05:31.297 SO libspdk_bdev_passthru.so.6.0 00:05:31.297 SYMLINK libspdk_bdev_delay.so 00:05:31.555 SYMLINK libspdk_bdev_malloc.so 00:05:31.555 CC module/bdev/raid/bdev_raid_sb.o 00:05:31.555 CC module/bdev/raid/raid0.o 00:05:31.555 SYMLINK libspdk_bdev_passthru.so 00:05:31.555 CC module/bdev/raid/raid1.o 00:05:31.555 CC module/bdev/raid/concat.o 00:05:31.555 CC module/bdev/split/vbdev_split.o 00:05:31.555 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:31.555 CC module/bdev/aio/bdev_aio.o 00:05:31.555 LIB libspdk_bdev_lvol.a 00:05:31.555 SO libspdk_bdev_lvol.so.6.0 00:05:31.814 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:31.814 SYMLINK libspdk_bdev_lvol.so 00:05:31.814 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:31.814 CC module/bdev/split/vbdev_split_rpc.o 00:05:31.814 CC module/bdev/aio/bdev_aio_rpc.o 00:05:31.814 LIB libspdk_bdev_raid.a 00:05:31.814 CC module/bdev/iscsi/bdev_iscsi.o 00:05:31.814 CC module/bdev/ftl/bdev_ftl.o 00:05:31.814 LIB libspdk_bdev_zone_block.a 00:05:31.814 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:31.814 SO libspdk_bdev_raid.so.6.0 00:05:31.814 SO libspdk_bdev_zone_block.so.6.0 00:05:32.073 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:32.073 LIB libspdk_bdev_split.a 00:05:32.073 SO libspdk_bdev_split.so.6.0 00:05:32.073 SYMLINK libspdk_bdev_zone_block.so 00:05:32.073 CC module/bdev/nvme/nvme_rpc.o 00:05:32.073 SYMLINK libspdk_bdev_raid.so 00:05:32.073 CC module/bdev/nvme/bdev_mdns_client.o 00:05:32.073 LIB libspdk_bdev_aio.a 00:05:32.073 SYMLINK libspdk_bdev_split.so 00:05:32.073 CC module/bdev/nvme/vbdev_opal.o 00:05:32.073 SO libspdk_bdev_aio.so.6.0 00:05:32.073 SYMLINK libspdk_bdev_aio.so 00:05:32.073 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:32.073 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:32.073 LIB libspdk_bdev_ftl.a 00:05:32.073 SO libspdk_bdev_ftl.so.6.0 00:05:32.332 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:32.332 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:32.332 SYMLINK libspdk_bdev_ftl.so 00:05:32.332 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:32.332 LIB libspdk_bdev_iscsi.a 00:05:32.332 SO libspdk_bdev_iscsi.so.6.0 00:05:32.332 LIB libspdk_bdev_virtio.a 00:05:32.590 SYMLINK libspdk_bdev_iscsi.so 00:05:32.590 SO libspdk_bdev_virtio.so.6.0 00:05:32.590 SYMLINK libspdk_bdev_virtio.so 00:05:32.849 LIB libspdk_bdev_nvme.a 00:05:33.107 SO libspdk_bdev_nvme.so.7.0 00:05:33.107 SYMLINK libspdk_bdev_nvme.so 00:05:33.675 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:33.675 CC module/event/subsystems/iobuf/iobuf.o 00:05:33.675 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:33.675 CC module/event/subsystems/keyring/keyring.o 00:05:33.675 CC module/event/subsystems/scheduler/scheduler.o 00:05:33.675 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:33.675 CC module/event/subsystems/fsdev/fsdev.o 00:05:33.675 CC module/event/subsystems/vmd/vmd.o 00:05:33.675 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:33.675 CC module/event/subsystems/sock/sock.o 00:05:33.675 LIB libspdk_event_keyring.a 00:05:33.675 LIB libspdk_event_vhost_blk.a 00:05:33.675 LIB libspdk_event_fsdev.a 00:05:33.675 SO libspdk_event_keyring.so.1.0 
00:05:33.675 LIB libspdk_event_sock.a 00:05:33.675 SO libspdk_event_vhost_blk.so.3.0 00:05:33.675 LIB libspdk_event_vmd.a 00:05:33.675 SO libspdk_event_fsdev.so.1.0 00:05:33.675 LIB libspdk_event_vfu_tgt.a 00:05:33.675 LIB libspdk_event_iobuf.a 00:05:33.675 LIB libspdk_event_scheduler.a 00:05:33.675 SO libspdk_event_sock.so.5.0 00:05:33.675 SO libspdk_event_vmd.so.6.0 00:05:33.675 SO libspdk_event_vfu_tgt.so.3.0 00:05:33.675 SYMLINK libspdk_event_keyring.so 00:05:33.675 SYMLINK libspdk_event_vhost_blk.so 00:05:33.675 SO libspdk_event_scheduler.so.4.0 00:05:33.675 SO libspdk_event_iobuf.so.3.0 00:05:33.675 SYMLINK libspdk_event_fsdev.so 00:05:33.675 SYMLINK libspdk_event_sock.so 00:05:33.675 SYMLINK libspdk_event_vfu_tgt.so 00:05:33.675 SYMLINK libspdk_event_vmd.so 00:05:33.675 SYMLINK libspdk_event_scheduler.so 00:05:33.934 SYMLINK libspdk_event_iobuf.so 00:05:34.193 CC module/event/subsystems/accel/accel.o 00:05:34.193 LIB libspdk_event_accel.a 00:05:34.193 SO libspdk_event_accel.so.6.0 00:05:34.193 SYMLINK libspdk_event_accel.so 00:05:34.451 CC module/event/subsystems/bdev/bdev.o 00:05:34.710 LIB libspdk_event_bdev.a 00:05:34.710 SO libspdk_event_bdev.so.6.0 00:05:34.968 SYMLINK libspdk_event_bdev.so 00:05:34.968 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:35.227 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:35.227 CC module/event/subsystems/scsi/scsi.o 00:05:35.227 CC module/event/subsystems/nbd/nbd.o 00:05:35.227 CC module/event/subsystems/ublk/ublk.o 00:05:35.227 LIB libspdk_event_nbd.a 00:05:35.227 LIB libspdk_event_ublk.a 00:05:35.227 LIB libspdk_event_scsi.a 00:05:35.227 SO libspdk_event_nbd.so.6.0 00:05:35.227 SO libspdk_event_ublk.so.3.0 00:05:35.227 SO libspdk_event_scsi.so.6.0 00:05:35.486 SYMLINK libspdk_event_nbd.so 00:05:35.486 SYMLINK libspdk_event_ublk.so 00:05:35.486 LIB libspdk_event_nvmf.a 00:05:35.486 SYMLINK libspdk_event_scsi.so 00:05:35.486 SO libspdk_event_nvmf.so.6.0 00:05:35.486 SYMLINK libspdk_event_nvmf.so 00:05:35.745 CC module/event/subsystems/iscsi/iscsi.o 00:05:35.745 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:35.745 LIB libspdk_event_vhost_scsi.a 00:05:35.745 LIB libspdk_event_iscsi.a 00:05:36.003 SO libspdk_event_vhost_scsi.so.3.0 00:05:36.003 SO libspdk_event_iscsi.so.6.0 00:05:36.003 SYMLINK libspdk_event_vhost_scsi.so 00:05:36.003 SYMLINK libspdk_event_iscsi.so 00:05:36.003 SO libspdk.so.6.0 00:05:36.262 SYMLINK libspdk.so 00:05:36.262 CXX app/trace/trace.o 00:05:36.521 CC app/spdk_lspci/spdk_lspci.o 00:05:36.521 CC app/trace_record/trace_record.o 00:05:36.521 CC app/spdk_nvme_perf/perf.o 00:05:36.521 CC app/iscsi_tgt/iscsi_tgt.o 00:05:36.521 CC app/nvmf_tgt/nvmf_main.o 00:05:36.521 CC app/spdk_tgt/spdk_tgt.o 00:05:36.521 CC examples/util/zipf/zipf.o 00:05:36.521 CC examples/ioat/perf/perf.o 00:05:36.521 CC test/thread/poller_perf/poller_perf.o 00:05:36.521 LINK spdk_lspci 00:05:36.779 LINK nvmf_tgt 00:05:36.779 LINK zipf 00:05:36.779 LINK poller_perf 00:05:36.779 LINK iscsi_tgt 00:05:36.779 LINK spdk_trace_record 00:05:36.779 LINK ioat_perf 00:05:36.779 LINK spdk_tgt 00:05:36.779 CC app/spdk_nvme_identify/identify.o 00:05:36.779 LINK spdk_trace 00:05:37.038 CC app/spdk_nvme_discover/discovery_aer.o 00:05:37.038 CC examples/ioat/verify/verify.o 00:05:37.038 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:37.038 CC app/spdk_top/spdk_top.o 00:05:37.038 CC test/dma/test_dma/test_dma.o 00:05:37.038 CC app/spdk_dd/spdk_dd.o 00:05:37.038 LINK spdk_nvme_discover 00:05:37.297 LINK interrupt_tgt 00:05:37.297 LINK verify 00:05:37.297 CC 
app/fio/nvme/fio_plugin.o 00:05:37.297 LINK spdk_nvme_perf 00:05:37.556 TEST_HEADER include/spdk/accel.h 00:05:37.556 TEST_HEADER include/spdk/accel_module.h 00:05:37.556 TEST_HEADER include/spdk/assert.h 00:05:37.556 TEST_HEADER include/spdk/barrier.h 00:05:37.556 TEST_HEADER include/spdk/base64.h 00:05:37.556 TEST_HEADER include/spdk/bdev.h 00:05:37.556 TEST_HEADER include/spdk/bdev_module.h 00:05:37.556 TEST_HEADER include/spdk/bdev_zone.h 00:05:37.556 TEST_HEADER include/spdk/bit_array.h 00:05:37.556 TEST_HEADER include/spdk/bit_pool.h 00:05:37.556 TEST_HEADER include/spdk/blob_bdev.h 00:05:37.556 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:37.556 TEST_HEADER include/spdk/blobfs.h 00:05:37.556 TEST_HEADER include/spdk/blob.h 00:05:37.556 TEST_HEADER include/spdk/conf.h 00:05:37.556 TEST_HEADER include/spdk/config.h 00:05:37.556 TEST_HEADER include/spdk/cpuset.h 00:05:37.556 TEST_HEADER include/spdk/crc16.h 00:05:37.556 TEST_HEADER include/spdk/crc32.h 00:05:37.556 TEST_HEADER include/spdk/crc64.h 00:05:37.556 TEST_HEADER include/spdk/dif.h 00:05:37.556 TEST_HEADER include/spdk/dma.h 00:05:37.556 TEST_HEADER include/spdk/endian.h 00:05:37.556 TEST_HEADER include/spdk/env_dpdk.h 00:05:37.556 TEST_HEADER include/spdk/env.h 00:05:37.556 TEST_HEADER include/spdk/event.h 00:05:37.556 TEST_HEADER include/spdk/fd_group.h 00:05:37.556 CC app/vhost/vhost.o 00:05:37.556 TEST_HEADER include/spdk/fd.h 00:05:37.556 TEST_HEADER include/spdk/file.h 00:05:37.556 TEST_HEADER include/spdk/fsdev.h 00:05:37.556 TEST_HEADER include/spdk/fsdev_module.h 00:05:37.556 LINK spdk_dd 00:05:37.556 TEST_HEADER include/spdk/ftl.h 00:05:37.556 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:37.556 TEST_HEADER include/spdk/gpt_spec.h 00:05:37.556 CC test/app/bdev_svc/bdev_svc.o 00:05:37.556 TEST_HEADER include/spdk/hexlify.h 00:05:37.556 TEST_HEADER include/spdk/histogram_data.h 00:05:37.556 TEST_HEADER include/spdk/idxd.h 00:05:37.556 TEST_HEADER include/spdk/idxd_spec.h 00:05:37.556 TEST_HEADER include/spdk/init.h 00:05:37.556 TEST_HEADER include/spdk/ioat.h 00:05:37.556 TEST_HEADER include/spdk/ioat_spec.h 00:05:37.556 TEST_HEADER include/spdk/iscsi_spec.h 00:05:37.556 TEST_HEADER include/spdk/json.h 00:05:37.556 TEST_HEADER include/spdk/jsonrpc.h 00:05:37.556 TEST_HEADER include/spdk/keyring.h 00:05:37.556 TEST_HEADER include/spdk/keyring_module.h 00:05:37.556 TEST_HEADER include/spdk/likely.h 00:05:37.556 TEST_HEADER include/spdk/log.h 00:05:37.556 TEST_HEADER include/spdk/lvol.h 00:05:37.556 TEST_HEADER include/spdk/md5.h 00:05:37.556 TEST_HEADER include/spdk/memory.h 00:05:37.556 TEST_HEADER include/spdk/mmio.h 00:05:37.556 TEST_HEADER include/spdk/nbd.h 00:05:37.556 CC examples/thread/thread/thread_ex.o 00:05:37.556 TEST_HEADER include/spdk/net.h 00:05:37.556 TEST_HEADER include/spdk/notify.h 00:05:37.556 TEST_HEADER include/spdk/nvme.h 00:05:37.556 TEST_HEADER include/spdk/nvme_intel.h 00:05:37.556 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:37.556 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:37.556 TEST_HEADER include/spdk/nvme_spec.h 00:05:37.556 TEST_HEADER include/spdk/nvme_zns.h 00:05:37.556 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:37.556 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:37.556 TEST_HEADER include/spdk/nvmf.h 00:05:37.556 TEST_HEADER include/spdk/nvmf_spec.h 00:05:37.556 TEST_HEADER include/spdk/nvmf_transport.h 00:05:37.556 TEST_HEADER include/spdk/opal.h 00:05:37.556 TEST_HEADER include/spdk/opal_spec.h 00:05:37.556 TEST_HEADER include/spdk/pci_ids.h 00:05:37.556 TEST_HEADER 
include/spdk/pipe.h 00:05:37.556 TEST_HEADER include/spdk/queue.h 00:05:37.556 TEST_HEADER include/spdk/reduce.h 00:05:37.556 TEST_HEADER include/spdk/rpc.h 00:05:37.556 TEST_HEADER include/spdk/scheduler.h 00:05:37.556 TEST_HEADER include/spdk/scsi.h 00:05:37.556 TEST_HEADER include/spdk/scsi_spec.h 00:05:37.556 TEST_HEADER include/spdk/sock.h 00:05:37.556 TEST_HEADER include/spdk/stdinc.h 00:05:37.556 LINK test_dma 00:05:37.556 TEST_HEADER include/spdk/string.h 00:05:37.556 TEST_HEADER include/spdk/thread.h 00:05:37.556 TEST_HEADER include/spdk/trace.h 00:05:37.556 TEST_HEADER include/spdk/trace_parser.h 00:05:37.556 TEST_HEADER include/spdk/tree.h 00:05:37.556 TEST_HEADER include/spdk/ublk.h 00:05:37.556 TEST_HEADER include/spdk/util.h 00:05:37.556 TEST_HEADER include/spdk/uuid.h 00:05:37.556 TEST_HEADER include/spdk/version.h 00:05:37.556 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:37.556 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:37.556 TEST_HEADER include/spdk/vhost.h 00:05:37.556 LINK spdk_nvme_identify 00:05:37.556 TEST_HEADER include/spdk/vmd.h 00:05:37.556 TEST_HEADER include/spdk/xor.h 00:05:37.556 TEST_HEADER include/spdk/zipf.h 00:05:37.556 CXX test/cpp_headers/accel.o 00:05:37.815 LINK vhost 00:05:37.815 LINK bdev_svc 00:05:37.815 LINK spdk_nvme 00:05:37.815 CXX test/cpp_headers/accel_module.o 00:05:37.815 LINK thread 00:05:37.815 CC app/fio/bdev/fio_plugin.o 00:05:37.815 LINK spdk_top 00:05:37.815 CC examples/sock/hello_world/hello_sock.o 00:05:38.074 CXX test/cpp_headers/assert.o 00:05:38.074 CC test/env/vtophys/vtophys.o 00:05:38.074 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:38.074 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:38.074 CC test/env/mem_callbacks/mem_callbacks.o 00:05:38.074 CC test/env/memory/memory_ut.o 00:05:38.332 LINK env_dpdk_post_init 00:05:38.332 CXX test/cpp_headers/barrier.o 00:05:38.332 LINK vtophys 00:05:38.332 LINK hello_sock 00:05:38.332 CC examples/vmd/lsvmd/lsvmd.o 00:05:38.332 LINK spdk_bdev 00:05:38.332 CXX test/cpp_headers/base64.o 00:05:38.591 CC test/env/pci/pci_ut.o 00:05:38.591 LINK lsvmd 00:05:38.591 CC test/app/histogram_perf/histogram_perf.o 00:05:38.591 LINK nvme_fuzz 00:05:38.591 CXX test/cpp_headers/bdev.o 00:05:38.591 CC examples/idxd/perf/perf.o 00:05:38.591 LINK mem_callbacks 00:05:38.591 LINK histogram_perf 00:05:38.850 CC test/event/event_perf/event_perf.o 00:05:38.850 CC examples/vmd/led/led.o 00:05:38.850 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:38.850 CXX test/cpp_headers/bdev_module.o 00:05:38.850 LINK pci_ut 00:05:38.850 LINK event_perf 00:05:38.850 CC test/rpc_client/rpc_client_test.o 00:05:38.850 LINK led 00:05:39.110 LINK idxd_perf 00:05:39.110 CC test/nvme/aer/aer.o 00:05:39.110 CXX test/cpp_headers/bdev_zone.o 00:05:39.110 LINK rpc_client_test 00:05:39.110 CC test/event/reactor/reactor.o 00:05:39.110 CC test/event/reactor_perf/reactor_perf.o 00:05:39.110 CC test/event/app_repeat/app_repeat.o 00:05:39.369 CC test/event/scheduler/scheduler.o 00:05:39.369 LINK reactor 00:05:39.369 LINK memory_ut 00:05:39.369 LINK reactor_perf 00:05:39.369 LINK aer 00:05:39.369 CXX test/cpp_headers/bit_array.o 00:05:39.369 LINK app_repeat 00:05:39.369 CXX test/cpp_headers/bit_pool.o 00:05:39.628 LINK scheduler 00:05:39.628 CC test/accel/dif/dif.o 00:05:39.628 CC test/app/jsoncat/jsoncat.o 00:05:39.628 CXX test/cpp_headers/blob_bdev.o 00:05:39.628 CC test/nvme/reset/reset.o 00:05:39.628 CC test/nvme/e2edp/nvme_dp.o 00:05:39.628 CC test/nvme/sgl/sgl.o 00:05:39.628 CC test/nvme/overhead/overhead.o 00:05:39.628 
LINK jsoncat 00:05:39.887 CXX test/cpp_headers/blobfs_bdev.o 00:05:39.887 CC test/nvme/err_injection/err_injection.o 00:05:39.887 CXX test/cpp_headers/blobfs.o 00:05:39.887 LINK nvme_dp 00:05:40.147 LINK overhead 00:05:40.147 LINK sgl 00:05:40.147 LINK reset 00:05:40.147 CXX test/cpp_headers/blob.o 00:05:40.147 LINK err_injection 00:05:40.147 CXX test/cpp_headers/conf.o 00:05:40.147 CXX test/cpp_headers/config.o 00:05:40.147 CC test/nvme/startup/startup.o 00:05:40.147 CC test/blobfs/mkfs/mkfs.o 00:05:40.147 LINK dif 00:05:40.406 CC test/app/stub/stub.o 00:05:40.406 CXX test/cpp_headers/cpuset.o 00:05:40.406 CC test/nvme/reserve/reserve.o 00:05:40.406 CC test/nvme/simple_copy/simple_copy.o 00:05:40.406 LINK startup 00:05:40.406 LINK mkfs 00:05:40.406 CXX test/cpp_headers/crc16.o 00:05:40.406 LINK iscsi_fuzz 00:05:40.406 CC test/lvol/esnap/esnap.o 00:05:40.406 LINK stub 00:05:40.665 CXX test/cpp_headers/crc32.o 00:05:40.665 LINK reserve 00:05:40.665 LINK simple_copy 00:05:40.665 CXX test/cpp_headers/crc64.o 00:05:40.665 CXX test/cpp_headers/dif.o 00:05:40.924 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:40.924 CC test/bdev/bdevio/bdevio.o 00:05:40.924 CXX test/cpp_headers/dma.o 00:05:40.924 CXX test/cpp_headers/endian.o 00:05:41.182 CC test/nvme/connect_stress/connect_stress.o 00:05:41.182 CC test/nvme/boot_partition/boot_partition.o 00:05:41.182 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:41.182 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:41.182 CC examples/accel/perf/accel_perf.o 00:05:41.182 CXX test/cpp_headers/env_dpdk.o 00:05:41.182 LINK connect_stress 00:05:41.441 LINK boot_partition 00:05:41.441 LINK bdevio 00:05:41.441 CXX test/cpp_headers/env.o 00:05:41.441 CC test/nvme/compliance/nvme_compliance.o 00:05:41.441 LINK hello_fsdev 00:05:41.699 CC test/nvme/fused_ordering/fused_ordering.o 00:05:41.699 CXX test/cpp_headers/event.o 00:05:41.699 LINK vhost_fuzz 00:05:41.699 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:41.699 CC test/nvme/fdp/fdp.o 00:05:41.699 CXX test/cpp_headers/fd_group.o 00:05:41.699 LINK nvme_compliance 00:05:41.699 LINK accel_perf 00:05:41.958 LINK fused_ordering 00:05:41.958 CXX test/cpp_headers/fd.o 00:05:41.958 LINK doorbell_aers 00:05:41.958 CC examples/blob/hello_world/hello_blob.o 00:05:41.958 CXX test/cpp_headers/file.o 00:05:41.958 CC examples/nvme/hello_world/hello_world.o 00:05:41.958 CXX test/cpp_headers/fsdev.o 00:05:41.958 CC examples/nvme/reconnect/reconnect.o 00:05:42.216 LINK fdp 00:05:42.216 CXX test/cpp_headers/fsdev_module.o 00:05:42.216 CXX test/cpp_headers/ftl.o 00:05:42.216 LINK hello_blob 00:05:42.216 LINK hello_world 00:05:42.216 CC test/nvme/cuse/cuse.o 00:05:42.475 CC examples/blob/cli/blobcli.o 00:05:42.475 LINK reconnect 00:05:42.475 CXX test/cpp_headers/fuse_dispatcher.o 00:05:42.475 CC examples/bdev/hello_world/hello_bdev.o 00:05:42.475 CC examples/bdev/bdevperf/bdevperf.o 00:05:42.475 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:42.475 CC examples/nvme/arbitration/arbitration.o 00:05:42.733 CXX test/cpp_headers/gpt_spec.o 00:05:42.733 CC examples/nvme/hotplug/hotplug.o 00:05:42.733 LINK hello_bdev 00:05:42.733 CXX test/cpp_headers/hexlify.o 00:05:42.733 LINK blobcli 00:05:42.733 LINK hotplug 00:05:42.733 CXX test/cpp_headers/histogram_data.o 00:05:42.992 LINK arbitration 00:05:42.992 CXX test/cpp_headers/idxd.o 00:05:42.992 CXX test/cpp_headers/idxd_spec.o 00:05:42.992 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:42.992 LINK nvme_manage 00:05:42.992 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:42.992 CC 
examples/nvme/abort/abort.o 00:05:43.253 CXX test/cpp_headers/init.o 00:05:43.253 CXX test/cpp_headers/ioat.o 00:05:43.253 CXX test/cpp_headers/ioat_spec.o 00:05:43.253 LINK bdevperf 00:05:43.253 LINK cmb_copy 00:05:43.253 LINK pmr_persistence 00:05:43.253 CXX test/cpp_headers/iscsi_spec.o 00:05:43.512 CXX test/cpp_headers/json.o 00:05:43.512 CXX test/cpp_headers/jsonrpc.o 00:05:43.512 CXX test/cpp_headers/keyring.o 00:05:43.512 CXX test/cpp_headers/keyring_module.o 00:05:43.512 LINK abort 00:05:43.512 CXX test/cpp_headers/likely.o 00:05:43.512 CXX test/cpp_headers/log.o 00:05:43.512 CXX test/cpp_headers/lvol.o 00:05:43.512 CXX test/cpp_headers/md5.o 00:05:43.770 CXX test/cpp_headers/memory.o 00:05:43.770 LINK cuse 00:05:43.770 CXX test/cpp_headers/mmio.o 00:05:43.770 CXX test/cpp_headers/nbd.o 00:05:43.770 CXX test/cpp_headers/net.o 00:05:43.770 CXX test/cpp_headers/notify.o 00:05:43.770 CXX test/cpp_headers/nvme.o 00:05:43.770 CXX test/cpp_headers/nvme_intel.o 00:05:43.770 CXX test/cpp_headers/nvme_ocssd.o 00:05:43.770 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:43.770 CXX test/cpp_headers/nvme_spec.o 00:05:43.770 CXX test/cpp_headers/nvme_zns.o 00:05:43.770 CXX test/cpp_headers/nvmf_cmd.o 00:05:43.770 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:43.770 CXX test/cpp_headers/nvmf.o 00:05:44.028 CXX test/cpp_headers/nvmf_spec.o 00:05:44.028 CC examples/nvmf/nvmf/nvmf.o 00:05:44.028 CXX test/cpp_headers/nvmf_transport.o 00:05:44.028 CXX test/cpp_headers/opal.o 00:05:44.028 CXX test/cpp_headers/opal_spec.o 00:05:44.028 CXX test/cpp_headers/pci_ids.o 00:05:44.028 CXX test/cpp_headers/pipe.o 00:05:44.028 CXX test/cpp_headers/queue.o 00:05:44.028 CXX test/cpp_headers/reduce.o 00:05:44.028 CXX test/cpp_headers/rpc.o 00:05:44.028 CXX test/cpp_headers/scheduler.o 00:05:44.286 CXX test/cpp_headers/scsi.o 00:05:44.287 CXX test/cpp_headers/scsi_spec.o 00:05:44.287 CXX test/cpp_headers/sock.o 00:05:44.287 CXX test/cpp_headers/stdinc.o 00:05:44.287 CXX test/cpp_headers/string.o 00:05:44.287 CXX test/cpp_headers/thread.o 00:05:44.287 CXX test/cpp_headers/trace.o 00:05:44.287 CXX test/cpp_headers/trace_parser.o 00:05:44.287 LINK nvmf 00:05:44.287 CXX test/cpp_headers/tree.o 00:05:44.287 CXX test/cpp_headers/ublk.o 00:05:44.287 CXX test/cpp_headers/util.o 00:05:44.287 CXX test/cpp_headers/uuid.o 00:05:44.287 CXX test/cpp_headers/version.o 00:05:44.287 CXX test/cpp_headers/vfio_user_pci.o 00:05:44.287 CXX test/cpp_headers/vfio_user_spec.o 00:05:44.287 CXX test/cpp_headers/vhost.o 00:05:44.545 CXX test/cpp_headers/vmd.o 00:05:44.545 CXX test/cpp_headers/xor.o 00:05:44.545 CXX test/cpp_headers/zipf.o 00:05:45.480 LINK esnap 00:05:45.737 00:05:45.737 real 1m17.939s 00:05:45.737 user 6m33.480s 00:05:45.737 sys 1m18.980s 00:05:45.737 17:26:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:45.737 17:26:27 make -- common/autotest_common.sh@10 -- $ set +x 00:05:45.737 ************************************ 00:05:45.737 END TEST make 00:05:45.737 ************************************ 00:05:45.737 17:26:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:45.737 17:26:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:45.737 17:26:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:45.737 17:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:45.737 17:26:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:45.737 17:26:27 -- pm/common@44 -- $ pid=6027 00:05:45.737 17:26:27 -- pm/common@50 -- $ kill 
-TERM 6027 00:05:45.737 17:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:45.737 17:26:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:45.737 17:26:27 -- pm/common@44 -- $ pid=6029 00:05:45.737 17:26:27 -- pm/common@50 -- $ kill -TERM 6029 00:05:45.995 17:26:27 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.995 17:26:27 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.995 17:26:27 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.995 17:26:27 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.995 17:26:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.995 17:26:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.995 17:26:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.995 17:26:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.995 17:26:27 -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.995 17:26:27 -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.995 17:26:27 -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.995 17:26:27 -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.995 17:26:27 -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.995 17:26:27 -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.995 17:26:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.995 17:26:27 -- scripts/common.sh@344 -- # case "$op" in 00:05:45.995 17:26:27 -- scripts/common.sh@345 -- # : 1 00:05:45.995 17:26:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.995 17:26:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.995 17:26:27 -- scripts/common.sh@365 -- # decimal 1 00:05:45.995 17:26:27 -- scripts/common.sh@353 -- # local d=1 00:05:45.995 17:26:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.995 17:26:27 -- scripts/common.sh@355 -- # echo 1 00:05:45.995 17:26:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.995 17:26:27 -- scripts/common.sh@366 -- # decimal 2 00:05:45.995 17:26:27 -- scripts/common.sh@353 -- # local d=2 00:05:45.995 17:26:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.995 17:26:27 -- scripts/common.sh@355 -- # echo 2 00:05:45.995 17:26:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.995 17:26:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.995 17:26:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.995 17:26:27 -- scripts/common.sh@368 -- # return 0 00:05:45.995 17:26:27 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.995 17:26:27 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.995 --rc genhtml_branch_coverage=1 00:05:45.995 --rc genhtml_function_coverage=1 00:05:45.995 --rc genhtml_legend=1 00:05:45.995 --rc geninfo_all_blocks=1 00:05:45.995 --rc geninfo_unexecuted_blocks=1 00:05:45.995 00:05:45.995 ' 00:05:45.995 17:26:27 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.995 --rc genhtml_branch_coverage=1 00:05:45.995 --rc genhtml_function_coverage=1 00:05:45.995 --rc genhtml_legend=1 00:05:45.995 --rc geninfo_all_blocks=1 00:05:45.995 --rc geninfo_unexecuted_blocks=1 00:05:45.995 00:05:45.995 ' 00:05:45.995 17:26:27 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.995 --rc 
genhtml_branch_coverage=1 00:05:45.995 --rc genhtml_function_coverage=1 00:05:45.995 --rc genhtml_legend=1 00:05:45.995 --rc geninfo_all_blocks=1 00:05:45.995 --rc geninfo_unexecuted_blocks=1 00:05:45.995 00:05:45.995 ' 00:05:45.995 17:26:27 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.995 --rc genhtml_branch_coverage=1 00:05:45.995 --rc genhtml_function_coverage=1 00:05:45.995 --rc genhtml_legend=1 00:05:45.995 --rc geninfo_all_blocks=1 00:05:45.995 --rc geninfo_unexecuted_blocks=1 00:05:45.995 00:05:45.995 ' 00:05:45.995 17:26:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.995 17:26:27 -- nvmf/common.sh@7 -- # uname -s 00:05:45.995 17:26:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.995 17:26:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.995 17:26:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.995 17:26:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.995 17:26:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.995 17:26:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.995 17:26:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.995 17:26:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.995 17:26:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.995 17:26:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.995 17:26:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:05:45.995 17:26:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:05:45.995 17:26:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.995 17:26:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.995 17:26:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:45.995 17:26:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.995 17:26:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.995 17:26:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.995 17:26:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.995 17:26:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.995 17:26:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.995 17:26:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.995 17:26:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.995 17:26:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.995 17:26:27 -- paths/export.sh@5 -- # export PATH 00:05:45.995 17:26:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.995 17:26:27 -- nvmf/common.sh@51 -- # : 0 00:05:45.995 17:26:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.995 17:26:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.995 17:26:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.995 17:26:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.995 17:26:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.995 17:26:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.995 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.995 17:26:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.995 17:26:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.995 17:26:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.995 17:26:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:45.995 17:26:27 -- spdk/autotest.sh@32 -- # uname -s 00:05:45.995 17:26:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:45.995 17:26:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:45.995 17:26:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:45.995 17:26:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:45.995 17:26:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:45.995 17:26:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:46.253 17:26:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:46.253 17:26:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:46.253 17:26:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:46.253 17:26:27 -- spdk/autotest.sh@48 -- # udevadm_pid=69135 00:05:46.253 17:26:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:46.253 17:26:27 -- pm/common@17 -- # local monitor 00:05:46.253 17:26:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.253 17:26:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.253 17:26:27 -- pm/common@25 -- # sleep 1 00:05:46.253 17:26:27 -- pm/common@21 -- # date +%s 00:05:46.253 17:26:27 -- pm/common@21 -- # date +%s 00:05:46.253 17:26:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732123587 00:05:46.253 17:26:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732123587 00:05:46.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732123587_collect-cpu-load.pm.log 00:05:46.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732123587_collect-vmstat.pm.log 00:05:47.189 17:26:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:47.189 17:26:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:47.189 17:26:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.189 17:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.189 17:26:28 -- spdk/autotest.sh@59 -- # create_test_list 
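The autotest prologue traced above saves the host's core_pattern, redirects kernel core dumps to the repo's core-collector.sh, and starts the collect-cpu-load and collect-vmstat monitors with epoch-stamped log names. A condensed sketch of the core-dump redirection as the log shows it (needs root; the trap-based restore at the end is an assumption, the log only shows the save and the replacement):

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # save the host's pattern
    mkdir -p "$output_dir/coredumps"
    # the leading '|' pipes each core to the collector; %P = pid, %s = signal, %t = dump time
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # assumed cleanup, not visible in this log: put the original pattern back on exit
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT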
00:05:47.189 17:26:28 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:47.189 17:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.189 17:26:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:47.189 17:26:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:47.189 17:26:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:47.189 17:26:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:47.189 17:26:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:47.189 17:26:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:47.189 17:26:28 -- common/autotest_common.sh@1455 -- # uname 00:05:47.189 17:26:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:47.189 17:26:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:47.189 17:26:28 -- common/autotest_common.sh@1475 -- # uname 00:05:47.189 17:26:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:47.189 17:26:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:47.189 17:26:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:47.448 lcov: LCOV version 1.15 00:05:47.448 17:26:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:02.326 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:02.326 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:17.205 17:26:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:17.205 17:26:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.205 17:26:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.205 17:26:56 -- spdk/autotest.sh@78 -- # rm -f 00:06:17.205 17:26:56 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.205 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:17.205 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:17.205 17:26:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:17.205 17:26:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:17.205 17:26:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:17.205 17:26:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:17.205 17:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.205 17:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:17.205 17:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:17.205 17:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.205 17:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 
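The trace here has just entered the zoned-device sweep, which continues below: before setup.sh reset touches anything, every /dev/nvme* namespace is classified by reading its sysfs zoned attribute, where "none" marks a conventional device. A standalone sketch of the same check, with the paths taken from the log and the function wrapper written for illustration:

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # 'none' marks a conventional namespace; anything else is zoned
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }
    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" && echo "${nvme##*/} is zoned"
    done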
00:06:17.205 17:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:06:17.205 17:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.205 17:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:06:17.205 17:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:06:17.205 17:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.205 17:26:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:17.205 17:26:57 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:17.205 17:26:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:17.205 17:26:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.205 17:26:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:17.205 17:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:17.205 17:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:17.205 17:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:17.205 17:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:17.205 17:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:17.205 No valid GPT data, bailing 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # pt= 00:06:17.205 17:26:57 -- scripts/common.sh@395 -- # return 1 00:06:17.205 17:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:17.205 1+0 records in 00:06:17.205 1+0 records out 00:06:17.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047324 s, 222 MB/s 00:06:17.205 17:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:17.205 17:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:17.205 17:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:06:17.205 17:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:06:17.205 17:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:06:17.205 No valid GPT data, bailing 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # pt= 00:06:17.205 17:26:57 -- scripts/common.sh@395 -- # return 1 00:06:17.205 17:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:06:17.205 1+0 records in 00:06:17.205 1+0 records out 00:06:17.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516599 s, 203 MB/s 00:06:17.205 17:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:17.205 17:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:17.205 17:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:06:17.205 17:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:06:17.205 17:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:06:17.205 No valid GPT data, bailing 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:17.205 17:26:57 -- 
scripts/common.sh@394 -- # pt= 00:06:17.205 17:26:57 -- scripts/common.sh@395 -- # return 1 00:06:17.205 17:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:06:17.205 1+0 records in 00:06:17.205 1+0 records out 00:06:17.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490423 s, 214 MB/s 00:06:17.205 17:26:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:17.205 17:26:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:17.205 17:26:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:17.205 17:26:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:17.205 17:26:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:17.205 No valid GPT data, bailing 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:17.205 17:26:57 -- scripts/common.sh@394 -- # pt= 00:06:17.205 17:26:57 -- scripts/common.sh@395 -- # return 1 00:06:17.205 17:26:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:17.205 1+0 records in 00:06:17.205 1+0 records out 00:06:17.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369101 s, 284 MB/s 00:06:17.205 17:26:57 -- spdk/autotest.sh@105 -- # sync 00:06:17.205 17:26:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:17.205 17:26:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:17.205 17:26:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:18.581 17:27:00 -- spdk/autotest.sh@111 -- # uname -s 00:06:18.581 17:27:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:18.581 17:27:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:18.581 17:27:00 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:19.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:19.148 Hugepages 00:06:19.148 node hugesize free / total 00:06:19.148 node0 1048576kB 0 / 0 00:06:19.148 node0 2048kB 0 / 0 00:06:19.148 00:06:19.148 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:19.148 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:19.415 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:19.415 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:06:19.415 17:27:01 -- spdk/autotest.sh@117 -- # uname -s 00:06:19.415 17:27:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:19.415 17:27:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:19.415 17:27:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:20.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.301 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.301 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.301 17:27:01 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:21.250 17:27:02 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:21.250 17:27:02 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:21.250 17:27:02 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:21.250 17:27:02 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:21.250 17:27:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:21.250 17:27:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:21.250 17:27:02 -- common/autotest_common.sh@1497 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:21.250 17:27:03 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:21.250 17:27:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:21.507 17:27:03 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:21.507 17:27:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:21.507 17:27:03 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:21.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:21.765 Waiting for block devices as requested 00:06:21.765 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.023 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.023 17:27:03 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:22.023 17:27:03 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:22.023 17:27:03 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:22.023 17:27:03 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:22.023 17:27:03 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1541 -- # continue 00:06:22.023 17:27:03 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:22.023 17:27:03 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:22.023 17:27:03 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:22.023 17:27:03 -- 
common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:22.023 17:27:03 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:22.023 17:27:03 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:22.023 17:27:03 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:22.023 17:27:03 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:22.023 17:27:03 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:22.023 17:27:03 -- common/autotest_common.sh@1541 -- # continue 00:06:22.023 17:27:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:22.023 17:27:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.023 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:22.023 17:27:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:22.023 17:27:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.023 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:22.023 17:27:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:22.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.960 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.960 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.960 17:27:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:22.960 17:27:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.960 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:06:22.960 17:27:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:22.960 17:27:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:22.960 17:27:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:22.960 17:27:04 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:22.960 17:27:04 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:22.960 17:27:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:22.960 17:27:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:22.960 17:27:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:22.960 17:27:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:22.960 17:27:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:22.960 17:27:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:23.218 17:27:04 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:23.219 17:27:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:23.219 17:27:04 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:23.219 17:27:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:23.219 17:27:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:23.219 17:27:04 -- common/autotest_common.sh@1564 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:06:23.219 17:27:04 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:23.219 17:27:04 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:23.219 17:27:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:23.219 17:27:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:23.219 17:27:04 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:23.219 17:27:04 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:23.219 17:27:04 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:23.219 17:27:04 -- common/autotest_common.sh@1570 -- # return 0 00:06:23.219 17:27:04 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:23.219 17:27:04 -- common/autotest_common.sh@1578 -- # return 0 00:06:23.219 17:27:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:23.219 17:27:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:23.219 17:27:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:23.219 17:27:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:23.219 17:27:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:23.219 17:27:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.219 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:06:23.219 17:27:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:23.219 17:27:04 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:23.219 17:27:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.219 17:27:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.219 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:06:23.219 ************************************ 00:06:23.219 START TEST env 00:06:23.219 ************************************ 00:06:23.219 17:27:04 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:23.219 * Looking for test storage... 00:06:23.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:23.219 17:27:04 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.219 17:27:04 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.219 17:27:04 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.477 17:27:05 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.477 17:27:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.477 17:27:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.477 17:27:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.477 17:27:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.477 17:27:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.477 17:27:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.477 17:27:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.477 17:27:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.477 17:27:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.477 17:27:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.477 17:27:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.477 17:27:05 env -- scripts/common.sh@344 -- # case "$op" in 00:06:23.477 17:27:05 env -- scripts/common.sh@345 -- # : 1 00:06:23.478 17:27:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.478 17:27:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.478 17:27:05 env -- scripts/common.sh@365 -- # decimal 1 00:06:23.478 17:27:05 env -- scripts/common.sh@353 -- # local d=1 00:06:23.478 17:27:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.478 17:27:05 env -- scripts/common.sh@355 -- # echo 1 00:06:23.478 17:27:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.478 17:27:05 env -- scripts/common.sh@366 -- # decimal 2 00:06:23.478 17:27:05 env -- scripts/common.sh@353 -- # local d=2 00:06:23.478 17:27:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.478 17:27:05 env -- scripts/common.sh@355 -- # echo 2 00:06:23.478 17:27:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.478 17:27:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.478 17:27:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.478 17:27:05 env -- scripts/common.sh@368 -- # return 0 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.478 --rc genhtml_branch_coverage=1 00:06:23.478 --rc genhtml_function_coverage=1 00:06:23.478 --rc genhtml_legend=1 00:06:23.478 --rc geninfo_all_blocks=1 00:06:23.478 --rc geninfo_unexecuted_blocks=1 00:06:23.478 00:06:23.478 ' 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.478 --rc genhtml_branch_coverage=1 00:06:23.478 --rc genhtml_function_coverage=1 00:06:23.478 --rc genhtml_legend=1 00:06:23.478 --rc geninfo_all_blocks=1 00:06:23.478 --rc geninfo_unexecuted_blocks=1 00:06:23.478 00:06:23.478 ' 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.478 --rc genhtml_branch_coverage=1 00:06:23.478 --rc genhtml_function_coverage=1 00:06:23.478 --rc genhtml_legend=1 00:06:23.478 --rc geninfo_all_blocks=1 00:06:23.478 --rc geninfo_unexecuted_blocks=1 00:06:23.478 00:06:23.478 ' 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.478 --rc genhtml_branch_coverage=1 00:06:23.478 --rc genhtml_function_coverage=1 00:06:23.478 --rc genhtml_legend=1 00:06:23.478 --rc geninfo_all_blocks=1 00:06:23.478 --rc geninfo_unexecuted_blocks=1 00:06:23.478 00:06:23.478 ' 00:06:23.478 17:27:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.478 17:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.478 17:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.478 ************************************ 00:06:23.478 START TEST env_memory 00:06:23.478 ************************************ 00:06:23.478 17:27:05 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:23.478 00:06:23.478 00:06:23.478 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.478 http://cunit.sourceforge.net/ 00:06:23.478 00:06:23.478 00:06:23.478 Suite: memory 00:06:23.478 Test: alloc and free memory map ...[2024-11-20 17:27:05.099540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:23.478 passed 00:06:23.478 Test: mem map translation ...[2024-11-20 17:27:05.130496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:23.478 [2024-11-20 17:27:05.130536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:23.478 [2024-11-20 17:27:05.130592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:23.478 [2024-11-20 17:27:05.130603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:23.478 passed 00:06:23.478 Test: mem map registration ...[2024-11-20 17:27:05.194268] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:23.478 [2024-11-20 17:27:05.194302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:23.478 passed 00:06:23.738 Test: mem map adjacent registrations ...passed 00:06:23.738 00:06:23.738 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.738 suites 1 1 n/a 0 0 00:06:23.738 tests 4 4 4 0 0 00:06:23.738 asserts 152 152 152 0 n/a 00:06:23.738 00:06:23.738 Elapsed time = 0.213 seconds 00:06:23.738 00:06:23.738 real 0m0.232s 00:06:23.738 user 0m0.215s 00:06:23.738 sys 0m0.013s 00:06:23.738 17:27:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.738 17:27:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:23.738 ************************************ 00:06:23.738 END TEST env_memory 00:06:23.738 ************************************ 00:06:23.738 17:27:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:23.738 17:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.738 17:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.738 17:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.738 ************************************ 00:06:23.738 START TEST env_vtophys 00:06:23.738 ************************************ 00:06:23.738 17:27:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:23.738 EAL: lib.eal log level changed from notice to debug 00:06:23.738 EAL: Detected lcore 0 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 1 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 2 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 3 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 4 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 5 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 6 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 7 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 8 as core 0 on socket 0 00:06:23.738 EAL: Detected lcore 9 as core 0 on socket 0 00:06:23.738 EAL: Maximum logical cores by configuration: 128 00:06:23.738 EAL: Detected CPU lcores: 10 00:06:23.738 EAL: Detected NUMA nodes: 1 00:06:23.738 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:23.738 EAL: Detected shared linkage of DPDK 00:06:23.738 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:23.738 EAL: Registered [vdev] bus. 00:06:23.738 EAL: bus.vdev log level changed from disabled to notice 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:23.738 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:23.738 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:23.738 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:23.738 EAL: No shared files mode enabled, IPC will be disabled 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Selected IOVA mode 'PA' 00:06:23.738 EAL: Probing VFIO support... 00:06:23.738 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:23.738 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:23.738 EAL: Ask a virtual area of 0x2e000 bytes 00:06:23.738 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:23.738 EAL: Setting up physically contiguous memory... 00:06:23.738 EAL: Setting maximum number of open files to 524288 00:06:23.738 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:23.738 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:23.738 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.738 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:23.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.738 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.738 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:23.738 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:23.738 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.738 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:23.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.738 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.738 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:23.738 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:23.738 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.738 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:23.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.738 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.738 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:23.738 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:23.738 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.738 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:23.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.738 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.738 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:23.738 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:23.738 EAL: Hugepages will be freed exactly as allocated. 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: TSC frequency is ~2200000 KHz 00:06:23.738 EAL: Main lcore 0 is ready (tid=7f264fe56a00;cpuset=[0]) 00:06:23.738 EAL: Trying to obtain current memory policy. 00:06:23.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.738 EAL: Restoring previous memory policy: 0 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was expanded by 2MB 00:06:23.738 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:23.738 EAL: Mem event callback 'spdk:(nil)' registered 00:06:23.738 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:23.738 00:06:23.738 00:06:23.738 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.738 http://cunit.sourceforge.net/ 00:06:23.738 00:06:23.738 00:06:23.738 Suite: components_suite 00:06:23.738 Test: vtophys_malloc_test ...passed 00:06:23.738 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:23.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.738 EAL: Restoring previous memory policy: 4 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was expanded by 4MB 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was shrunk by 4MB 00:06:23.738 EAL: Trying to obtain current memory policy. 00:06:23.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.738 EAL: Restoring previous memory policy: 4 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was expanded by 6MB 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was shrunk by 6MB 00:06:23.738 EAL: Trying to obtain current memory policy. 00:06:23.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.738 EAL: Restoring previous memory policy: 4 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.738 EAL: Heap on socket 0 was expanded by 10MB 00:06:23.738 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.738 EAL: request: mp_malloc_sync 00:06:23.738 EAL: No shared files mode enabled, IPC is disabled 00:06:23.739 EAL: Heap on socket 0 was shrunk by 10MB 00:06:23.739 EAL: Trying to obtain current memory policy. 
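The EAL bring-up above reserves four 16 GiB virtual areas (0x400000000 bytes, 8192 segments of 2 MiB each) for hugepage-backed memseg lists before the malloc suite starts. The per-node pools those segments draw from can be inspected read-only from sysfs, matching the "node hugesize free / total" table setup.sh printed earlier; a small sketch:

    for node in /sys/devices/system/node/node*; do
        for pool in "$node"/hugepages/hugepages-*; do
            size=${pool##*hugepages-}                  # e.g. "2048kB"
            printf '%s %s: %s free / %s total\n' "${node##*/}" "$size" \
                "$(< "$pool/free_hugepages")" "$(< "$pool/nr_hugepages")"
        done
    done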
00:06:23.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.739 EAL: Restoring previous memory policy: 4 00:06:23.739 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.739 EAL: request: mp_malloc_sync 00:06:23.739 EAL: No shared files mode enabled, IPC is disabled 00:06:23.739 EAL: Heap on socket 0 was expanded by 18MB 00:06:23.739 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.739 EAL: request: mp_malloc_sync 00:06:23.739 EAL: No shared files mode enabled, IPC is disabled 00:06:23.739 EAL: Heap on socket 0 was shrunk by 18MB 00:06:23.739 EAL: Trying to obtain current memory policy. 00:06:23.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.739 EAL: Restoring previous memory policy: 4 00:06:23.739 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.739 EAL: request: mp_malloc_sync 00:06:23.739 EAL: No shared files mode enabled, IPC is disabled 00:06:23.739 EAL: Heap on socket 0 was expanded by 34MB 00:06:23.997 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.997 EAL: request: mp_malloc_sync 00:06:23.997 EAL: No shared files mode enabled, IPC is disabled 00:06:23.997 EAL: Heap on socket 0 was shrunk by 34MB 00:06:23.997 EAL: Trying to obtain current memory policy. 00:06:23.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.997 EAL: Restoring previous memory policy: 4 00:06:23.997 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.997 EAL: request: mp_malloc_sync 00:06:23.997 EAL: No shared files mode enabled, IPC is disabled 00:06:23.997 EAL: Heap on socket 0 was expanded by 66MB 00:06:23.997 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.997 EAL: request: mp_malloc_sync 00:06:23.997 EAL: No shared files mode enabled, IPC is disabled 00:06:23.997 EAL: Heap on socket 0 was shrunk by 66MB 00:06:23.997 EAL: Trying to obtain current memory policy. 00:06:23.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.997 EAL: Restoring previous memory policy: 4 00:06:23.997 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.997 EAL: request: mp_malloc_sync 00:06:23.997 EAL: No shared files mode enabled, IPC is disabled 00:06:23.997 EAL: Heap on socket 0 was expanded by 130MB 00:06:23.997 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.997 EAL: request: mp_malloc_sync 00:06:23.997 EAL: No shared files mode enabled, IPC is disabled 00:06:23.997 EAL: Heap on socket 0 was shrunk by 130MB 00:06:23.998 EAL: Trying to obtain current memory policy. 00:06:23.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.998 EAL: Restoring previous memory policy: 4 00:06:23.998 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.998 EAL: request: mp_malloc_sync 00:06:23.998 EAL: No shared files mode enabled, IPC is disabled 00:06:23.998 EAL: Heap on socket 0 was expanded by 258MB 00:06:23.998 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.256 EAL: request: mp_malloc_sync 00:06:24.256 EAL: No shared files mode enabled, IPC is disabled 00:06:24.256 EAL: Heap on socket 0 was shrunk by 258MB 00:06:24.256 EAL: Trying to obtain current memory policy. 
00:06:24.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.256 EAL: Restoring previous memory policy: 4 00:06:24.256 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.256 EAL: request: mp_malloc_sync 00:06:24.256 EAL: No shared files mode enabled, IPC is disabled 00:06:24.256 EAL: Heap on socket 0 was expanded by 514MB 00:06:24.256 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.515 EAL: request: mp_malloc_sync 00:06:24.515 EAL: No shared files mode enabled, IPC is disabled 00:06:24.515 EAL: Heap on socket 0 was shrunk by 514MB 00:06:24.515 EAL: Trying to obtain current memory policy. 00:06:24.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.773 EAL: Restoring previous memory policy: 4 00:06:24.773 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.773 EAL: request: mp_malloc_sync 00:06:24.773 EAL: No shared files mode enabled, IPC is disabled 00:06:24.773 EAL: Heap on socket 0 was expanded by 1026MB 00:06:25.031 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.031 passed 00:06:25.031 00:06:25.031 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.031 suites 1 1 n/a 0 0 00:06:25.031 tests 2 2 2 0 0 00:06:25.031 asserts 5358 5358 5358 0 n/a 00:06:25.031 00:06:25.031 Elapsed time = 1.228 seconds 00:06:25.031 EAL: request: mp_malloc_sync 00:06:25.031 EAL: No shared files mode enabled, IPC is disabled 00:06:25.031 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:25.031 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.031 EAL: request: mp_malloc_sync 00:06:25.031 EAL: No shared files mode enabled, IPC is disabled 00:06:25.031 EAL: Heap on socket 0 was shrunk by 2MB 00:06:25.031 EAL: No shared files mode enabled, IPC is disabled 00:06:25.031 EAL: No shared files mode enabled, IPC is disabled 00:06:25.031 EAL: No shared files mode enabled, IPC is disabled 00:06:25.031 00:06:25.031 real 0m1.427s 00:06:25.031 user 0m0.788s 00:06:25.031 sys 0m0.506s 00:06:25.031 17:27:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.031 17:27:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:25.031 ************************************ 00:06:25.031 END TEST env_vtophys 00:06:25.031 ************************************ 00:06:25.031 17:27:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:25.031 17:27:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.031 17:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.031 17:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 START TEST env_pci 00:06:25.296 ************************************ 00:06:25.296 17:27:06 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:25.296 00:06:25.296 00:06:25.296 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.296 http://cunit.sourceforge.net/ 00:06:25.296 00:06:25.296 00:06:25.296 Suite: pci 00:06:25.297 Test: pci_hook ...[2024-11-20 17:27:06.832287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71349 has claimed it 00:06:25.297 passed 00:06:25.297 00:06:25.297 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.297 suites 1 1 n/a 0 0 00:06:25.297 tests 1 1 1 0 0 00:06:25.297 asserts 25 25 25 0 n/a 00:06:25.297 00:06:25.297 Elapsed time = 0.002 seconds 00:06:25.297 EAL: Cannot find 
device (10000:00:01.0) 00:06:25.297 EAL: Failed to attach device on primary process 00:06:25.297 00:06:25.297 real 0m0.021s 00:06:25.297 user 0m0.011s 00:06:25.297 sys 0m0.010s 00:06:25.297 17:27:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.297 ************************************ 00:06:25.297 END TEST env_pci 00:06:25.297 ************************************ 00:06:25.297 17:27:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:25.297 17:27:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:25.297 17:27:06 env -- env/env.sh@15 -- # uname 00:06:25.297 17:27:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:25.297 17:27:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:25.297 17:27:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:25.297 17:27:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:25.297 17:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.297 17:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.297 ************************************ 00:06:25.297 START TEST env_dpdk_post_init 00:06:25.297 ************************************ 00:06:25.297 17:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:25.297 EAL: Detected CPU lcores: 10 00:06:25.297 EAL: Detected NUMA nodes: 1 00:06:25.297 EAL: Detected shared linkage of DPDK 00:06:25.297 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.297 EAL: Selected IOVA mode 'PA' 00:06:25.297 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.297 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:25.297 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:25.297 Starting DPDK initialization... 00:06:25.297 Starting SPDK post initialization... 00:06:25.297 SPDK NVMe probe 00:06:25.297 Attaching to 0000:00:10.0 00:06:25.297 Attaching to 0000:00:11.0 00:06:25.297 Attached to 0000:00:10.0 00:06:25.297 Attached to 0000:00:11.0 00:06:25.297 Cleaning up... 
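[Annotation] The env_dpdk_post_init run above attaches two emulated NVMe controllers (vendor:device 1b36:0010 is QEMU's NVMe model) through the spdk_nvme PCI driver. Before EAL can probe devices like this, they must be detached from the kernel NVMe driver; SPDK's standard helper for that is scripts/setup.sh. A hedged sketch (the HUGEMEM value is an illustrative assumption; this run used uio rather than vfio-pci since the vfio modules were not loaded):

  sudo HUGEMEM=2048 ./scripts/setup.sh   # bind NVMe devices to vfio-pci/uio and reserve hugepages
  ./scripts/setup.sh status              # confirm 0000:00:10.0 and 0000:00:11.0 left the kernel driver
  sudo ./scripts/setup.sh reset          # hand the devices back to the kernel when done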
00:06:25.557 00:06:25.557 real 0m0.184s 00:06:25.557 user 0m0.039s 00:06:25.557 sys 0m0.044s 00:06:25.557 ************************************ 00:06:25.557 END TEST env_dpdk_post_init 00:06:25.557 ************************************ 00:06:25.557 17:27:07 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.557 17:27:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.557 17:27:07 env -- env/env.sh@26 -- # uname 00:06:25.557 17:27:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:25.557 17:27:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.557 17:27:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.557 17:27:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.557 17:27:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.557 ************************************ 00:06:25.557 START TEST env_mem_callbacks 00:06:25.557 ************************************ 00:06:25.557 17:27:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.557 EAL: Detected CPU lcores: 10 00:06:25.557 EAL: Detected NUMA nodes: 1 00:06:25.557 EAL: Detected shared linkage of DPDK 00:06:25.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.557 EAL: Selected IOVA mode 'PA' 00:06:25.557 00:06:25.557 00:06:25.557 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.557 http://cunit.sourceforge.net/ 00:06:25.557 00:06:25.557 00:06:25.557 Suite: memory 00:06:25.557 Test: test ... 00:06:25.557 register 0x200000200000 2097152 00:06:25.557 malloc 3145728 00:06:25.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.557 register 0x200000400000 4194304 00:06:25.557 buf 0x200000500000 len 3145728 PASSED 00:06:25.557 malloc 64 00:06:25.557 buf 0x2000004fff40 len 64 PASSED 00:06:25.557 malloc 4194304 00:06:25.557 register 0x200000800000 6291456 00:06:25.557 buf 0x200000a00000 len 4194304 PASSED 00:06:25.557 free 0x200000500000 3145728 00:06:25.557 free 0x2000004fff40 64 00:06:25.557 unregister 0x200000400000 4194304 PASSED 00:06:25.557 free 0x200000a00000 4194304 00:06:25.557 unregister 0x200000800000 6291456 PASSED 00:06:25.557 malloc 8388608 00:06:25.557 register 0x200000400000 10485760 00:06:25.557 buf 0x200000600000 len 8388608 PASSED 00:06:25.557 free 0x200000600000 8388608 00:06:25.557 unregister 0x200000400000 10485760 PASSED 00:06:25.557 passed 00:06:25.557 00:06:25.557 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.557 suites 1 1 n/a 0 0 00:06:25.557 tests 1 1 1 0 0 00:06:25.557 asserts 15 15 15 0 n/a 00:06:25.557 00:06:25.557 Elapsed time = 0.009 seconds 00:06:25.557 00:06:25.557 real 0m0.143s 00:06:25.557 user 0m0.022s 00:06:25.557 sys 0m0.020s 00:06:25.557 ************************************ 00:06:25.557 END TEST env_mem_callbacks 00:06:25.557 ************************************ 00:06:25.557 17:27:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.557 17:27:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 ************************************ 00:06:25.817 END TEST env 00:06:25.817 ************************************ 00:06:25.817 00:06:25.817 real 0m2.500s 00:06:25.817 user 0m1.284s 00:06:25.817 sys 0m0.852s 00:06:25.817 17:27:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.817 17:27:07 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.817 17:27:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:25.817 17:27:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.817 17:27:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.817 17:27:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 ************************************ 00:06:25.817 START TEST rpc 00:06:25.817 ************************************ 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:25.817 * Looking for test storage... 00:06:25.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.817 17:27:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.817 17:27:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.817 17:27:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.817 17:27:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.817 17:27:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.817 17:27:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:25.817 17:27:07 rpc -- scripts/common.sh@345 -- # : 1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.817 17:27:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.817 17:27:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@353 -- # local d=1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.817 17:27:07 rpc -- scripts/common.sh@355 -- # echo 1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.817 17:27:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@353 -- # local d=2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.817 17:27:07 rpc -- scripts/common.sh@355 -- # echo 2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.817 17:27:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.817 17:27:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.817 17:27:07 rpc -- scripts/common.sh@368 -- # return 0 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.817 --rc genhtml_branch_coverage=1 00:06:25.817 --rc genhtml_function_coverage=1 00:06:25.817 --rc genhtml_legend=1 00:06:25.817 --rc geninfo_all_blocks=1 00:06:25.817 --rc geninfo_unexecuted_blocks=1 00:06:25.817 00:06:25.817 ' 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.817 --rc genhtml_branch_coverage=1 00:06:25.817 --rc genhtml_function_coverage=1 00:06:25.817 --rc genhtml_legend=1 00:06:25.817 --rc geninfo_all_blocks=1 00:06:25.817 --rc geninfo_unexecuted_blocks=1 00:06:25.817 00:06:25.817 ' 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.817 --rc genhtml_branch_coverage=1 00:06:25.817 --rc genhtml_function_coverage=1 00:06:25.817 --rc genhtml_legend=1 00:06:25.817 --rc geninfo_all_blocks=1 00:06:25.817 --rc geninfo_unexecuted_blocks=1 00:06:25.817 00:06:25.817 ' 00:06:25.817 17:27:07 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.817 --rc genhtml_branch_coverage=1 00:06:25.817 --rc genhtml_function_coverage=1 00:06:25.817 --rc genhtml_legend=1 00:06:25.817 --rc geninfo_all_blocks=1 00:06:25.817 --rc geninfo_unexecuted_blocks=1 00:06:25.817 00:06:25.817 ' 00:06:26.076 17:27:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71466 00:06:26.076 17:27:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:26.076 17:27:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.076 17:27:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71466 00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@831 -- # '[' -z 71466 ']' 00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
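[Annotation] At this point the harness has launched spdk_tgt with -e bdev (enabling the bdev tracepoint group) and is polling for the JSON-RPC listener on /var/tmp/spdk.sock, which is what waitforlisten does. A hedged sketch of the same wait-then-talk pattern outside the harness, using only the binaries and RPC method seen in this log:

  ./build/bin/spdk_tgt -e bdev &
  # wait until the Unix-domain RPC socket exists, then issue a first call
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version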
00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.076 17:27:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.076 [2024-11-20 17:27:07.687201] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:26.076 [2024-11-20 17:27:07.687773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71466 ] 00:06:26.076 [2024-11-20 17:27:07.823689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.335 [2024-11-20 17:27:07.921487] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:26.335 [2024-11-20 17:27:07.921901] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71466' to capture a snapshot of events at runtime. 00:06:26.335 [2024-11-20 17:27:07.922279] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.335 [2024-11-20 17:27:07.922322] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.335 [2024-11-20 17:27:07.922340] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71466 for offline analysis/debug. 00:06:26.335 [2024-11-20 17:27:07.922386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.271 17:27:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.271 17:27:08 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.271 17:27:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.271 17:27:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.271 17:27:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:27.271 17:27:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:27.271 17:27:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.271 17:27:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.271 17:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.271 ************************************ 00:06:27.271 START TEST rpc_integrity 00:06:27.271 ************************************ 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.271 
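[Annotation] Earlier in this block, app_setup_trace printed how to inspect the bdev tracepoints enabled with -e bdev: attach spdk_trace to the live process, or copy the shm file for offline analysis. As plain commands (pid 71466 is this run's target; the spdk_trace path assumes a default build layout):

  ./build/bin/spdk_trace -s spdk_tgt -p 71466   # live snapshot of events, as the notice suggests
  cp /dev/shm/spdk_tgt_trace.pid71466 /tmp/     # keep the trace file for offline analysis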
17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.271 { 00:06:27.271 "aliases": [ 00:06:27.271 "291ee113-dbb2-45cb-a54d-2df9a9207c1e" 00:06:27.271 ], 00:06:27.271 "assigned_rate_limits": { 00:06:27.271 "r_mbytes_per_sec": 0, 00:06:27.271 "rw_ios_per_sec": 0, 00:06:27.271 "rw_mbytes_per_sec": 0, 00:06:27.271 "w_mbytes_per_sec": 0 00:06:27.271 }, 00:06:27.271 "block_size": 512, 00:06:27.271 "claimed": false, 00:06:27.271 "driver_specific": {}, 00:06:27.271 "memory_domains": [ 00:06:27.271 { 00:06:27.271 "dma_device_id": "system", 00:06:27.271 "dma_device_type": 1 00:06:27.271 }, 00:06:27.271 { 00:06:27.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.271 "dma_device_type": 2 00:06:27.271 } 00:06:27.271 ], 00:06:27.271 "name": "Malloc0", 00:06:27.271 "num_blocks": 16384, 00:06:27.271 "product_name": "Malloc disk", 00:06:27.271 "supported_io_types": { 00:06:27.271 "abort": true, 00:06:27.271 "compare": false, 00:06:27.271 "compare_and_write": false, 00:06:27.271 "copy": true, 00:06:27.271 "flush": true, 00:06:27.271 "get_zone_info": false, 00:06:27.271 "nvme_admin": false, 00:06:27.271 "nvme_io": false, 00:06:27.271 "nvme_io_md": false, 00:06:27.271 "nvme_iov_md": false, 00:06:27.271 "read": true, 00:06:27.271 "reset": true, 00:06:27.271 "seek_data": false, 00:06:27.271 "seek_hole": false, 00:06:27.271 "unmap": true, 00:06:27.271 "write": true, 00:06:27.271 "write_zeroes": true, 00:06:27.271 "zcopy": true, 00:06:27.271 "zone_append": false, 00:06:27.271 "zone_management": false 00:06:27.271 }, 00:06:27.271 "uuid": "291ee113-dbb2-45cb-a54d-2df9a9207c1e", 00:06:27.271 "zoned": false 00:06:27.271 } 00:06:27.271 ]' 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.271 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.271 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.271 [2024-11-20 17:27:08.872031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:27.271 [2024-11-20 17:27:08.872100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.271 [2024-11-20 17:27:08.872122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106ca80 00:06:27.271 [2024-11-20 17:27:08.872133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.272 [2024-11-20 17:27:08.873406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.272 [2024-11-20 17:27:08.873444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.272 Passthru0 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
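[Annotation] rpc_integrity has just created an 8 MB, 512-byte-block malloc bdev (Malloc0) and layered Passthru0 on top of it; the entries that follow list both bdevs, check jq length == 2, then tear them down in reverse order. The same sequence driven directly with scripts/rpc.py, sketched under the assumption of the default socket path:

  ./scripts/rpc.py bdev_malloc_create 8 512       # 8 MB / 512 B blocks; prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length     # 2, matching the check below
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length     # back to 0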
00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.272 { 00:06:27.272 "aliases": [ 00:06:27.272 "291ee113-dbb2-45cb-a54d-2df9a9207c1e" 00:06:27.272 ], 00:06:27.272 "assigned_rate_limits": { 00:06:27.272 "r_mbytes_per_sec": 0, 00:06:27.272 "rw_ios_per_sec": 0, 00:06:27.272 "rw_mbytes_per_sec": 0, 00:06:27.272 "w_mbytes_per_sec": 0 00:06:27.272 }, 00:06:27.272 "block_size": 512, 00:06:27.272 "claim_type": "exclusive_write", 00:06:27.272 "claimed": true, 00:06:27.272 "driver_specific": {}, 00:06:27.272 "memory_domains": [ 00:06:27.272 { 00:06:27.272 "dma_device_id": "system", 00:06:27.272 "dma_device_type": 1 00:06:27.272 }, 00:06:27.272 { 00:06:27.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.272 "dma_device_type": 2 00:06:27.272 } 00:06:27.272 ], 00:06:27.272 "name": "Malloc0", 00:06:27.272 "num_blocks": 16384, 00:06:27.272 "product_name": "Malloc disk", 00:06:27.272 "supported_io_types": { 00:06:27.272 "abort": true, 00:06:27.272 "compare": false, 00:06:27.272 "compare_and_write": false, 00:06:27.272 "copy": true, 00:06:27.272 "flush": true, 00:06:27.272 "get_zone_info": false, 00:06:27.272 "nvme_admin": false, 00:06:27.272 "nvme_io": false, 00:06:27.272 "nvme_io_md": false, 00:06:27.272 "nvme_iov_md": false, 00:06:27.272 "read": true, 00:06:27.272 "reset": true, 00:06:27.272 "seek_data": false, 00:06:27.272 "seek_hole": false, 00:06:27.272 "unmap": true, 00:06:27.272 "write": true, 00:06:27.272 "write_zeroes": true, 00:06:27.272 "zcopy": true, 00:06:27.272 "zone_append": false, 00:06:27.272 "zone_management": false 00:06:27.272 }, 00:06:27.272 "uuid": "291ee113-dbb2-45cb-a54d-2df9a9207c1e", 00:06:27.272 "zoned": false 00:06:27.272 }, 00:06:27.272 { 00:06:27.272 "aliases": [ 00:06:27.272 "1afbf5ac-5f42-5244-a1e6-1fdb1e158247" 00:06:27.272 ], 00:06:27.272 "assigned_rate_limits": { 00:06:27.272 "r_mbytes_per_sec": 0, 00:06:27.272 "rw_ios_per_sec": 0, 00:06:27.272 "rw_mbytes_per_sec": 0, 00:06:27.272 "w_mbytes_per_sec": 0 00:06:27.272 }, 00:06:27.272 "block_size": 512, 00:06:27.272 "claimed": false, 00:06:27.272 "driver_specific": { 00:06:27.272 "passthru": { 00:06:27.272 "base_bdev_name": "Malloc0", 00:06:27.272 "name": "Passthru0" 00:06:27.272 } 00:06:27.272 }, 00:06:27.272 "memory_domains": [ 00:06:27.272 { 00:06:27.272 "dma_device_id": "system", 00:06:27.272 "dma_device_type": 1 00:06:27.272 }, 00:06:27.272 { 00:06:27.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.272 "dma_device_type": 2 00:06:27.272 } 00:06:27.272 ], 00:06:27.272 "name": "Passthru0", 00:06:27.272 "num_blocks": 16384, 00:06:27.272 "product_name": "passthru", 00:06:27.272 "supported_io_types": { 00:06:27.272 "abort": true, 00:06:27.272 "compare": false, 00:06:27.272 "compare_and_write": false, 00:06:27.272 "copy": true, 00:06:27.272 "flush": true, 00:06:27.272 "get_zone_info": false, 00:06:27.272 "nvme_admin": false, 00:06:27.272 "nvme_io": false, 00:06:27.272 "nvme_io_md": false, 00:06:27.272 "nvme_iov_md": false, 00:06:27.272 "read": true, 00:06:27.272 "reset": true, 00:06:27.272 "seek_data": false, 00:06:27.272 "seek_hole": false, 00:06:27.272 "unmap": true, 00:06:27.272 "write": true, 00:06:27.272 "write_zeroes": true, 
00:06:27.272 "zcopy": true, 00:06:27.272 "zone_append": false, 00:06:27.272 "zone_management": false 00:06:27.272 }, 00:06:27.272 "uuid": "1afbf5ac-5f42-5244-a1e6-1fdb1e158247", 00:06:27.272 "zoned": false 00:06:27.272 } 00:06:27.272 ]' 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.272 17:27:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.272 17:27:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.272 ************************************ 00:06:27.272 END TEST rpc_integrity 00:06:27.272 ************************************ 00:06:27.272 17:27:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.272 00:06:27.272 real 0m0.302s 00:06:27.272 user 0m0.196s 00:06:27.272 sys 0m0.036s 00:06:27.272 17:27:09 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.272 17:27:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.547 17:27:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:27.547 17:27:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.547 17:27:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.547 17:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.547 ************************************ 00:06:27.547 START TEST rpc_plugins 00:06:27.547 ************************************ 00:06:27.547 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:27.547 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:27.547 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.547 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:27.548 { 00:06:27.548 "aliases": [ 00:06:27.548 
"0967cd75-e476-406b-a894-ab4afdf25a73" 00:06:27.548 ], 00:06:27.548 "assigned_rate_limits": { 00:06:27.548 "r_mbytes_per_sec": 0, 00:06:27.548 "rw_ios_per_sec": 0, 00:06:27.548 "rw_mbytes_per_sec": 0, 00:06:27.548 "w_mbytes_per_sec": 0 00:06:27.548 }, 00:06:27.548 "block_size": 4096, 00:06:27.548 "claimed": false, 00:06:27.548 "driver_specific": {}, 00:06:27.548 "memory_domains": [ 00:06:27.548 { 00:06:27.548 "dma_device_id": "system", 00:06:27.548 "dma_device_type": 1 00:06:27.548 }, 00:06:27.548 { 00:06:27.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.548 "dma_device_type": 2 00:06:27.548 } 00:06:27.548 ], 00:06:27.548 "name": "Malloc1", 00:06:27.548 "num_blocks": 256, 00:06:27.548 "product_name": "Malloc disk", 00:06:27.548 "supported_io_types": { 00:06:27.548 "abort": true, 00:06:27.548 "compare": false, 00:06:27.548 "compare_and_write": false, 00:06:27.548 "copy": true, 00:06:27.548 "flush": true, 00:06:27.548 "get_zone_info": false, 00:06:27.548 "nvme_admin": false, 00:06:27.548 "nvme_io": false, 00:06:27.548 "nvme_io_md": false, 00:06:27.548 "nvme_iov_md": false, 00:06:27.548 "read": true, 00:06:27.548 "reset": true, 00:06:27.548 "seek_data": false, 00:06:27.548 "seek_hole": false, 00:06:27.548 "unmap": true, 00:06:27.548 "write": true, 00:06:27.548 "write_zeroes": true, 00:06:27.548 "zcopy": true, 00:06:27.548 "zone_append": false, 00:06:27.548 "zone_management": false 00:06:27.548 }, 00:06:27.548 "uuid": "0967cd75-e476-406b-a894-ab4afdf25a73", 00:06:27.548 "zoned": false 00:06:27.548 } 00:06:27.548 ]' 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:27.548 ************************************ 00:06:27.548 END TEST rpc_plugins 00:06:27.548 ************************************ 00:06:27.548 17:27:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:27.548 00:06:27.548 real 0m0.169s 00:06:27.548 user 0m0.108s 00:06:27.548 sys 0m0.022s 00:06:27.548 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.549 17:27:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.549 17:27:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:27.549 17:27:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.549 17:27:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.549 17:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.549 ************************************ 00:06:27.549 START TEST rpc_trace_cmd_test 00:06:27.549 ************************************ 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.549 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:27.549 "bdev": { 00:06:27.549 "mask": "0x8", 00:06:27.549 "tpoint_mask": "0xffffffffffffffff" 00:06:27.549 }, 00:06:27.549 "bdev_nvme": { 00:06:27.549 "mask": "0x4000", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "bdev_raid": { 00:06:27.549 "mask": "0x20000", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "blob": { 00:06:27.549 "mask": "0x10000", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "blobfs": { 00:06:27.549 "mask": "0x80", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "dsa": { 00:06:27.549 "mask": "0x200", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "ftl": { 00:06:27.549 "mask": "0x40", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "iaa": { 00:06:27.549 "mask": "0x1000", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "iscsi_conn": { 00:06:27.549 "mask": "0x2", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "nvme_pcie": { 00:06:27.549 "mask": "0x800", 00:06:27.549 "tpoint_mask": "0x0" 00:06:27.549 }, 00:06:27.549 "nvme_tcp": { 00:06:27.550 "mask": "0x2000", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "nvmf_rdma": { 00:06:27.550 "mask": "0x10", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "nvmf_tcp": { 00:06:27.550 "mask": "0x20", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "scsi": { 00:06:27.550 "mask": "0x4", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "sock": { 00:06:27.550 "mask": "0x8000", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "thread": { 00:06:27.550 "mask": "0x400", 00:06:27.550 "tpoint_mask": "0x0" 00:06:27.550 }, 00:06:27.550 "tpoint_group_mask": "0x8", 00:06:27.550 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71466" 00:06:27.550 }' 00:06:27.550 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:27.813 ************************************ 00:06:27.813 END TEST rpc_trace_cmd_test 00:06:27.813 ************************************ 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:27.813 00:06:27.813 real 0m0.245s 00:06:27.813 user 0m0.209s 00:06:27.813 sys 0m0.027s 00:06:27.813 17:27:09 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.813 17:27:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.813 17:27:09 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:27.813 17:27:09 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:27.813 17:27:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.813 17:27:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.814 17:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 ************************************ 00:06:27.814 START TEST go_rpc 00:06:27.814 ************************************ 00:06:27.814 17:27:09 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:06:27.814 17:27:09 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:28.072 17:27:09 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.072 17:27:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 17:27:09 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:28.072 17:27:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["726eb376-eda9-4ba5-8123-6c89a394a012"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"726eb376-eda9-4ba5-8123-6c89a394a012","zoned":false}]' 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:28.073 17:27:09 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.073 17:27:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.073 17:27:09 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:06:28.073 ************************************ 00:06:28.073 END TEST go_rpc 00:06:28.073 ************************************ 00:06:28.073 17:27:09 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:28.073 00:06:28.073 real 0m0.203s 00:06:28.073 user 0m0.134s 00:06:28.073 sys 0m0.038s 00:06:28.073 17:27:09 rpc.go_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:28.073 17:27:09 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.073 17:27:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:28.073 17:27:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:28.073 17:27:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.073 17:27:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.073 17:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.073 ************************************ 00:06:28.073 START TEST rpc_daemon_integrity 00:06:28.073 ************************************ 00:06:28.073 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:28.073 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:28.073 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.073 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:28.332 { 00:06:28.332 "aliases": [ 00:06:28.332 "0e907439-9720-4851-a6f3-cf612dc80dfc" 00:06:28.332 ], 00:06:28.332 "assigned_rate_limits": { 00:06:28.332 "r_mbytes_per_sec": 0, 00:06:28.332 "rw_ios_per_sec": 0, 00:06:28.332 "rw_mbytes_per_sec": 0, 00:06:28.332 "w_mbytes_per_sec": 0 00:06:28.332 }, 00:06:28.332 "block_size": 512, 00:06:28.332 "claimed": false, 00:06:28.332 "driver_specific": {}, 00:06:28.332 "memory_domains": [ 00:06:28.332 { 00:06:28.332 "dma_device_id": "system", 00:06:28.332 "dma_device_type": 1 00:06:28.332 }, 00:06:28.332 { 00:06:28.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.332 "dma_device_type": 2 00:06:28.332 } 00:06:28.332 ], 00:06:28.332 "name": "Malloc3", 00:06:28.332 "num_blocks": 16384, 00:06:28.332 "product_name": "Malloc disk", 00:06:28.332 "supported_io_types": { 00:06:28.332 "abort": true, 00:06:28.332 "compare": false, 00:06:28.332 "compare_and_write": false, 00:06:28.332 "copy": true, 00:06:28.332 "flush": true, 00:06:28.332 "get_zone_info": false, 00:06:28.332 "nvme_admin": false, 00:06:28.332 "nvme_io": false, 00:06:28.332 "nvme_io_md": false, 00:06:28.332 "nvme_iov_md": false, 00:06:28.332 "read": true, 00:06:28.332 "reset": true, 00:06:28.332 "seek_data": false, 00:06:28.332 "seek_hole": false, 
00:06:28.332 "unmap": true, 00:06:28.332 "write": true, 00:06:28.332 "write_zeroes": true, 00:06:28.332 "zcopy": true, 00:06:28.332 "zone_append": false, 00:06:28.332 "zone_management": false 00:06:28.332 }, 00:06:28.332 "uuid": "0e907439-9720-4851-a6f3-cf612dc80dfc", 00:06:28.332 "zoned": false 00:06:28.332 } 00:06:28.332 ]' 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.332 17:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 [2024-11-20 17:27:10.000394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:28.332 [2024-11-20 17:27:10.000557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.332 [2024-11-20 17:27:10.000581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10fd560 00:06:28.332 [2024-11-20 17:27:10.000593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.332 [2024-11-20 17:27:10.002003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.332 [2024-11-20 17:27:10.002048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:28.332 Passthru0 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.332 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:28.332 { 00:06:28.332 "aliases": [ 00:06:28.332 "0e907439-9720-4851-a6f3-cf612dc80dfc" 00:06:28.332 ], 00:06:28.332 "assigned_rate_limits": { 00:06:28.332 "r_mbytes_per_sec": 0, 00:06:28.332 "rw_ios_per_sec": 0, 00:06:28.332 "rw_mbytes_per_sec": 0, 00:06:28.332 "w_mbytes_per_sec": 0 00:06:28.332 }, 00:06:28.332 "block_size": 512, 00:06:28.332 "claim_type": "exclusive_write", 00:06:28.332 "claimed": true, 00:06:28.332 "driver_specific": {}, 00:06:28.332 "memory_domains": [ 00:06:28.332 { 00:06:28.332 "dma_device_id": "system", 00:06:28.332 "dma_device_type": 1 00:06:28.332 }, 00:06:28.332 { 00:06:28.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.332 "dma_device_type": 2 00:06:28.332 } 00:06:28.332 ], 00:06:28.332 "name": "Malloc3", 00:06:28.332 "num_blocks": 16384, 00:06:28.332 "product_name": "Malloc disk", 00:06:28.332 "supported_io_types": { 00:06:28.332 "abort": true, 00:06:28.332 "compare": false, 00:06:28.332 "compare_and_write": false, 00:06:28.332 "copy": true, 00:06:28.332 "flush": true, 00:06:28.332 "get_zone_info": false, 00:06:28.332 "nvme_admin": false, 00:06:28.332 "nvme_io": false, 00:06:28.332 "nvme_io_md": false, 00:06:28.332 "nvme_iov_md": false, 00:06:28.332 "read": true, 00:06:28.332 "reset": true, 00:06:28.332 "seek_data": false, 00:06:28.332 "seek_hole": false, 00:06:28.332 "unmap": true, 00:06:28.332 "write": true, 00:06:28.332 "write_zeroes": true, 00:06:28.332 "zcopy": true, 
00:06:28.332 "zone_append": false, 00:06:28.332 "zone_management": false 00:06:28.332 }, 00:06:28.332 "uuid": "0e907439-9720-4851-a6f3-cf612dc80dfc", 00:06:28.332 "zoned": false 00:06:28.332 }, 00:06:28.332 { 00:06:28.332 "aliases": [ 00:06:28.333 "62f0fe30-5334-5518-bbe4-4e09535b8f07" 00:06:28.333 ], 00:06:28.333 "assigned_rate_limits": { 00:06:28.333 "r_mbytes_per_sec": 0, 00:06:28.333 "rw_ios_per_sec": 0, 00:06:28.333 "rw_mbytes_per_sec": 0, 00:06:28.333 "w_mbytes_per_sec": 0 00:06:28.333 }, 00:06:28.333 "block_size": 512, 00:06:28.333 "claimed": false, 00:06:28.333 "driver_specific": { 00:06:28.333 "passthru": { 00:06:28.333 "base_bdev_name": "Malloc3", 00:06:28.333 "name": "Passthru0" 00:06:28.333 } 00:06:28.333 }, 00:06:28.333 "memory_domains": [ 00:06:28.333 { 00:06:28.333 "dma_device_id": "system", 00:06:28.333 "dma_device_type": 1 00:06:28.333 }, 00:06:28.333 { 00:06:28.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.333 "dma_device_type": 2 00:06:28.333 } 00:06:28.333 ], 00:06:28.333 "name": "Passthru0", 00:06:28.333 "num_blocks": 16384, 00:06:28.333 "product_name": "passthru", 00:06:28.333 "supported_io_types": { 00:06:28.333 "abort": true, 00:06:28.333 "compare": false, 00:06:28.333 "compare_and_write": false, 00:06:28.333 "copy": true, 00:06:28.333 "flush": true, 00:06:28.333 "get_zone_info": false, 00:06:28.333 "nvme_admin": false, 00:06:28.333 "nvme_io": false, 00:06:28.333 "nvme_io_md": false, 00:06:28.333 "nvme_iov_md": false, 00:06:28.333 "read": true, 00:06:28.333 "reset": true, 00:06:28.333 "seek_data": false, 00:06:28.333 "seek_hole": false, 00:06:28.333 "unmap": true, 00:06:28.333 "write": true, 00:06:28.333 "write_zeroes": true, 00:06:28.333 "zcopy": true, 00:06:28.333 "zone_append": false, 00:06:28.333 "zone_management": false 00:06:28.333 }, 00:06:28.333 "uuid": "62f0fe30-5334-5518-bbe4-4e09535b8f07", 00:06:28.333 "zoned": false 00:06:28.333 } 00:06:28.333 ]' 00:06:28.333 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:28.333 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:28.333 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:28.333 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.333 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:28.592 ************************************ 00:06:28.592 END TEST rpc_daemon_integrity 00:06:28.592 
************************************ 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:28.592 00:06:28.592 real 0m0.342s 00:06:28.592 user 0m0.237s 00:06:28.592 sys 0m0.038s 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.592 17:27:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.592 17:27:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:28.592 17:27:10 rpc -- rpc/rpc.sh@84 -- # killprocess 71466 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@950 -- # '[' -z 71466 ']' 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@954 -- # kill -0 71466 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71466 00:06:28.592 killing process with pid 71466 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71466' 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@969 -- # kill 71466 00:06:28.592 17:27:10 rpc -- common/autotest_common.sh@974 -- # wait 71466 00:06:29.161 ************************************ 00:06:29.161 END TEST rpc 00:06:29.161 ************************************ 00:06:29.161 00:06:29.161 real 0m3.386s 00:06:29.161 user 0m4.296s 00:06:29.161 sys 0m0.876s 00:06:29.161 17:27:10 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.161 17:27:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.161 17:27:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:29.161 17:27:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.161 17:27:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.161 17:27:10 -- common/autotest_common.sh@10 -- # set +x 00:06:29.161 ************************************ 00:06:29.161 START TEST skip_rpc 00:06:29.161 ************************************ 00:06:29.161 17:27:10 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:29.161 * Looking for test storage... 
00:06:29.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.161 17:27:10 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.161 17:27:10 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.161 17:27:10 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.420 17:27:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.420 --rc genhtml_branch_coverage=1 00:06:29.420 --rc genhtml_function_coverage=1 00:06:29.420 --rc genhtml_legend=1 00:06:29.420 --rc geninfo_all_blocks=1 00:06:29.420 --rc geninfo_unexecuted_blocks=1 00:06:29.420 00:06:29.420 ' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.420 --rc genhtml_branch_coverage=1 00:06:29.420 --rc genhtml_function_coverage=1 00:06:29.420 --rc genhtml_legend=1 00:06:29.420 --rc geninfo_all_blocks=1 00:06:29.420 --rc geninfo_unexecuted_blocks=1 00:06:29.420 00:06:29.420 ' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:29.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.420 --rc genhtml_branch_coverage=1 00:06:29.420 --rc genhtml_function_coverage=1 00:06:29.420 --rc genhtml_legend=1 00:06:29.420 --rc geninfo_all_blocks=1 00:06:29.420 --rc geninfo_unexecuted_blocks=1 00:06:29.420 00:06:29.420 ' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.420 --rc genhtml_branch_coverage=1 00:06:29.420 --rc genhtml_function_coverage=1 00:06:29.420 --rc genhtml_legend=1 00:06:29.420 --rc geninfo_all_blocks=1 00:06:29.420 --rc geninfo_unexecuted_blocks=1 00:06:29.420 00:06:29.420 ' 00:06:29.420 17:27:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.420 17:27:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:29.420 17:27:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.420 17:27:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.420 ************************************ 00:06:29.420 START TEST skip_rpc 00:06:29.420 ************************************ 00:06:29.420 17:27:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:29.420 17:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71741 00:06:29.420 17:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:29.420 17:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.420 17:27:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:29.420 [2024-11-20 17:27:11.123673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.420 [2024-11-20 17:27:11.123929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71741 ] 00:06:29.679 [2024-11-20 17:27:11.263687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.679 [2024-11-20 17:27:11.338676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 2024/11/20 17:27:16 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71741 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 71741 ']' 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 71741 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71741 00:06:34.951 killing process with pid 71741 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71741' 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 71741 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 71741 00:06:34.951 00:06:34.951 real 0m5.554s 00:06:34.951 user 0m5.076s 00:06:34.951 sys 0m0.387s 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.951 ************************************ 00:06:34.951 END TEST skip_rpc 00:06:34.951 ************************************ 00:06:34.951 17:27:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 17:27:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:34.951 17:27:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.951 17:27:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.951 17:27:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 ************************************ 00:06:34.951 START TEST skip_rpc_with_json 00:06:34.951 ************************************ 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71833 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71833 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71833 ']' 00:06:34.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.951 17:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 [2024-11-20 17:27:16.729140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
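The skip_rpc_with_json case starting here builds test/rpc/config.json by driving a live target over RPC and dumping the result, as the trace ahead shows. Condensed, and assuming the default /var/tmp/spdk.sock socket:

    # the transport does not exist yet, so the first query fails with -19 (No such device)
    scripts/rpc.py nvmf_get_transports --trtype tcp || true
    scripts/rpc.py nvmf_create_transport -t tcp
    # persist the runtime state so a later spdk_tgt --json restart can replay it
    scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json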
00:06:34.951 [2024-11-20 17:27:16.729242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71833 ] 00:06:35.209 [2024-11-20 17:27:16.867942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.209 [2024-11-20 17:27:16.931992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.151 [2024-11-20 17:27:17.736263] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:36.151 2024/11/20 17:27:17 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:06:36.151 request: 00:06:36.151 { 00:06:36.151 "method": "nvmf_get_transports", 00:06:36.151 "params": { 00:06:36.151 "trtype": "tcp" 00:06:36.151 } 00:06:36.151 } 00:06:36.151 Got JSON-RPC error response 00:06:36.151 GoRPCClient: error on JSON-RPC call 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.151 [2024-11-20 17:27:17.748362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.151 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.151 { 00:06:36.151 "subsystems": [ 00:06:36.152 { 00:06:36.152 "subsystem": "fsdev", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "fsdev_set_opts", 00:06:36.152 "params": { 00:06:36.152 "fsdev_io_cache_size": 256, 00:06:36.152 "fsdev_io_pool_size": 65535 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "vfio_user_target", 00:06:36.152 "config": null 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "keyring", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "iobuf", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "iobuf_set_options", 00:06:36.152 "params": { 00:06:36.152 "large_bufsize": 135168, 00:06:36.152 "large_pool_count": 1024, 00:06:36.152 
"small_bufsize": 8192, 00:06:36.152 "small_pool_count": 8192 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "sock", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "sock_set_default_impl", 00:06:36.152 "params": { 00:06:36.152 "impl_name": "posix" 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "sock_impl_set_options", 00:06:36.152 "params": { 00:06:36.152 "enable_ktls": false, 00:06:36.152 "enable_placement_id": 0, 00:06:36.152 "enable_quickack": false, 00:06:36.152 "enable_recv_pipe": true, 00:06:36.152 "enable_zerocopy_send_client": false, 00:06:36.152 "enable_zerocopy_send_server": true, 00:06:36.152 "impl_name": "ssl", 00:06:36.152 "recv_buf_size": 4096, 00:06:36.152 "send_buf_size": 4096, 00:06:36.152 "tls_version": 0, 00:06:36.152 "zerocopy_threshold": 0 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "sock_impl_set_options", 00:06:36.152 "params": { 00:06:36.152 "enable_ktls": false, 00:06:36.152 "enable_placement_id": 0, 00:06:36.152 "enable_quickack": false, 00:06:36.152 "enable_recv_pipe": true, 00:06:36.152 "enable_zerocopy_send_client": false, 00:06:36.152 "enable_zerocopy_send_server": true, 00:06:36.152 "impl_name": "posix", 00:06:36.152 "recv_buf_size": 2097152, 00:06:36.152 "send_buf_size": 2097152, 00:06:36.152 "tls_version": 0, 00:06:36.152 "zerocopy_threshold": 0 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "vmd", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "accel", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "accel_set_options", 00:06:36.152 "params": { 00:06:36.152 "buf_count": 2048, 00:06:36.152 "large_cache_size": 16, 00:06:36.152 "sequence_count": 2048, 00:06:36.152 "small_cache_size": 128, 00:06:36.152 "task_count": 2048 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "bdev", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "bdev_set_options", 00:06:36.152 "params": { 00:06:36.152 "bdev_auto_examine": true, 00:06:36.152 "bdev_io_cache_size": 256, 00:06:36.152 "bdev_io_pool_size": 65535, 00:06:36.152 "iobuf_large_cache_size": 16, 00:06:36.152 "iobuf_small_cache_size": 128 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "bdev_raid_set_options", 00:06:36.152 "params": { 00:06:36.152 "process_max_bandwidth_mb_sec": 0, 00:06:36.152 "process_window_size_kb": 1024 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "bdev_iscsi_set_options", 00:06:36.152 "params": { 00:06:36.152 "timeout_sec": 30 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "bdev_nvme_set_options", 00:06:36.152 "params": { 00:06:36.152 "action_on_timeout": "none", 00:06:36.152 "allow_accel_sequence": false, 00:06:36.152 "arbitration_burst": 0, 00:06:36.152 "bdev_retry_count": 3, 00:06:36.152 "ctrlr_loss_timeout_sec": 0, 00:06:36.152 "delay_cmd_submit": true, 00:06:36.152 "dhchap_dhgroups": [ 00:06:36.152 "null", 00:06:36.152 "ffdhe2048", 00:06:36.152 "ffdhe3072", 00:06:36.152 "ffdhe4096", 00:06:36.152 "ffdhe6144", 00:06:36.152 "ffdhe8192" 00:06:36.152 ], 00:06:36.152 "dhchap_digests": [ 00:06:36.152 "sha256", 00:06:36.152 "sha384", 00:06:36.152 "sha512" 00:06:36.152 ], 00:06:36.152 "disable_auto_failback": false, 00:06:36.152 "fast_io_fail_timeout_sec": 0, 00:06:36.152 "generate_uuids": false, 00:06:36.152 "high_priority_weight": 0, 00:06:36.152 
"io_path_stat": false, 00:06:36.152 "io_queue_requests": 0, 00:06:36.152 "keep_alive_timeout_ms": 10000, 00:06:36.152 "low_priority_weight": 0, 00:06:36.152 "medium_priority_weight": 0, 00:06:36.152 "nvme_adminq_poll_period_us": 10000, 00:06:36.152 "nvme_error_stat": false, 00:06:36.152 "nvme_ioq_poll_period_us": 0, 00:06:36.152 "rdma_cm_event_timeout_ms": 0, 00:06:36.152 "rdma_max_cq_size": 0, 00:06:36.152 "rdma_srq_size": 0, 00:06:36.152 "reconnect_delay_sec": 0, 00:06:36.152 "timeout_admin_us": 0, 00:06:36.152 "timeout_us": 0, 00:06:36.152 "transport_ack_timeout": 0, 00:06:36.152 "transport_retry_count": 4, 00:06:36.152 "transport_tos": 0 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "bdev_nvme_set_hotplug", 00:06:36.152 "params": { 00:06:36.152 "enable": false, 00:06:36.152 "period_us": 100000 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "bdev_wait_for_examine" 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "scsi", 00:06:36.152 "config": null 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "scheduler", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "framework_set_scheduler", 00:06:36.152 "params": { 00:06:36.152 "name": "static" 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "vhost_scsi", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "vhost_blk", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "ublk", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "nbd", 00:06:36.152 "config": [] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "nvmf", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "nvmf_set_config", 00:06:36.152 "params": { 00:06:36.152 "admin_cmd_passthru": { 00:06:36.152 "identify_ctrlr": false 00:06:36.152 }, 00:06:36.152 "dhchap_dhgroups": [ 00:06:36.152 "null", 00:06:36.152 "ffdhe2048", 00:06:36.152 "ffdhe3072", 00:06:36.152 "ffdhe4096", 00:06:36.152 "ffdhe6144", 00:06:36.152 "ffdhe8192" 00:06:36.152 ], 00:06:36.152 "dhchap_digests": [ 00:06:36.152 "sha256", 00:06:36.152 "sha384", 00:06:36.152 "sha512" 00:06:36.152 ], 00:06:36.152 "discovery_filter": "match_any" 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "nvmf_set_max_subsystems", 00:06:36.152 "params": { 00:06:36.152 "max_subsystems": 1024 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "nvmf_set_crdt", 00:06:36.152 "params": { 00:06:36.152 "crdt1": 0, 00:06:36.152 "crdt2": 0, 00:06:36.152 "crdt3": 0 00:06:36.152 } 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "method": "nvmf_create_transport", 00:06:36.152 "params": { 00:06:36.152 "abort_timeout_sec": 1, 00:06:36.152 "ack_timeout": 0, 00:06:36.152 "buf_cache_size": 4294967295, 00:06:36.152 "c2h_success": true, 00:06:36.152 "data_wr_pool_size": 0, 00:06:36.152 "dif_insert_or_strip": false, 00:06:36.152 "in_capsule_data_size": 4096, 00:06:36.152 "io_unit_size": 131072, 00:06:36.152 "max_aq_depth": 128, 00:06:36.152 "max_io_qpairs_per_ctrlr": 127, 00:06:36.152 "max_io_size": 131072, 00:06:36.152 "max_queue_depth": 128, 00:06:36.152 "num_shared_buffers": 511, 00:06:36.152 "sock_priority": 0, 00:06:36.152 "trtype": "TCP", 00:06:36.152 "zcopy": false 00:06:36.152 } 00:06:36.152 } 00:06:36.152 ] 00:06:36.152 }, 00:06:36.152 { 00:06:36.152 "subsystem": "iscsi", 00:06:36.152 "config": [ 00:06:36.152 { 00:06:36.152 "method": "iscsi_set_options", 
00:06:36.152 "params": { 00:06:36.152 "allow_duplicated_isid": false, 00:06:36.152 "chap_group": 0, 00:06:36.152 "data_out_pool_size": 2048, 00:06:36.152 "default_time2retain": 20, 00:06:36.152 "default_time2wait": 2, 00:06:36.152 "disable_chap": false, 00:06:36.152 "error_recovery_level": 0, 00:06:36.152 "first_burst_length": 8192, 00:06:36.152 "immediate_data": true, 00:06:36.152 "immediate_data_pool_size": 16384, 00:06:36.153 "max_connections_per_session": 2, 00:06:36.153 "max_large_datain_per_connection": 64, 00:06:36.153 "max_queue_depth": 64, 00:06:36.153 "max_r2t_per_connection": 4, 00:06:36.153 "max_sessions": 128, 00:06:36.153 "mutual_chap": false, 00:06:36.153 "node_base": "iqn.2016-06.io.spdk", 00:06:36.153 "nop_in_interval": 30, 00:06:36.153 "nop_timeout": 60, 00:06:36.153 "pdu_pool_size": 36864, 00:06:36.153 "require_chap": false 00:06:36.153 } 00:06:36.153 } 00:06:36.153 ] 00:06:36.153 } 00:06:36.153 ] 00:06:36.153 } 00:06:36.153 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:36.153 17:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71833 00:06:36.153 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71833 ']' 00:06:36.153 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71833 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71833 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.412 killing process with pid 71833 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71833' 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71833 00:06:36.412 17:27:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71833 00:06:36.980 17:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71878 00:06:36.980 17:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:36.980 17:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71878 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71878 ']' 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71878 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71878 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.253 killing process with pid 71878 
00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71878' 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71878 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71878 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:42.253 00:06:42.253 real 0m7.288s 00:06:42.253 user 0m6.936s 00:06:42.253 sys 0m0.797s 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.253 ************************************ 00:06:42.253 END TEST skip_rpc_with_json 00:06:42.253 ************************************ 00:06:42.253 17:27:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:42.253 17:27:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.253 17:27:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.253 17:27:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.253 ************************************ 00:06:42.253 START TEST skip_rpc_with_delay 00:06:42.253 ************************************ 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:42.253 17:27:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.513 [2024-11-20 17:27:24.072613] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
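That spdk_app_start error is the entire skip_rpc_with_delay assertion: --wait-for-rpc tells the app to pause until an RPC arrives, which is contradictory when --no-rpc-server disables the listener, so startup must fail fast. A sketch of the check (path from the log; the real test wraps this in its NOT helper):

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "target started despite contradictory flags" >&2
        exit 1
    fi
    # expected on stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.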
00:06:42.513 [2024-11-20 17:27:24.072739] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.513 00:06:42.513 real 0m0.098s 00:06:42.513 user 0m0.062s 00:06:42.513 sys 0m0.035s 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.513 17:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:42.513 ************************************ 00:06:42.513 END TEST skip_rpc_with_delay 00:06:42.513 ************************************ 00:06:42.513 17:27:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:42.513 17:27:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:42.513 17:27:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:42.513 17:27:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.513 17:27:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.513 17:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.513 ************************************ 00:06:42.513 START TEST exit_on_failed_rpc_init 00:06:42.513 ************************************ 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71982 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71982 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71982 ']' 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.513 17:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:42.513 [2024-11-20 17:27:24.223648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:42.513 [2024-11-20 17:27:24.223742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71982 ] 00:06:42.773 [2024-11-20 17:27:24.360928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.773 [2024-11-20 17:27:24.423829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:43.710 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.710 [2024-11-20 17:27:25.303199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.710 [2024-11-20 17:27:25.303300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:06:43.710 [2024-11-20 17:27:25.443654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.970 [2024-11-20 17:27:25.508050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.970 [2024-11-20 17:27:25.508175] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
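The socket-in-use error above, and the spdk_rpc_initialize failure just after it, come from launching a second target against a socket the first one still owns. A hedged reproduction (core masks as in the log; sleep stands in for the suite's waitforlisten):

    build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
    first=$!
    sleep 1
    build/bin/spdk_tgt -m 0x2      # second instance: 'in use. Specify another.' and exits non-zero
    echo "second instance exit code: $?"
    kill "$first"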
00:06:43.970 [2024-11-20 17:27:25.508192] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:43.970 [2024-11-20 17:27:25.508202] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71982 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71982 ']' 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71982 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71982 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.970 killing process with pid 71982 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71982' 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71982 00:06:43.970 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71982 00:06:44.229 00:06:44.229 real 0m1.837s 00:06:44.229 user 0m2.115s 00:06:44.229 sys 0m0.460s 00:06:44.229 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.229 17:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.229 ************************************ 00:06:44.229 END TEST exit_on_failed_rpc_init 00:06:44.229 ************************************ 00:06:44.488 17:27:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:44.488 00:06:44.488 real 0m15.198s 00:06:44.488 user 0m14.371s 00:06:44.488 sys 0m1.900s 00:06:44.488 17:27:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.488 17:27:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.488 ************************************ 00:06:44.488 END TEST skip_rpc 00:06:44.488 ************************************ 00:06:44.488 17:27:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:44.488 17:27:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.488 17:27:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.488 17:27:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.488 
************************************ 00:06:44.488 START TEST rpc_client 00:06:44.488 ************************************ 00:06:44.488 17:27:26 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:44.488 * Looking for test storage... 00:06:44.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:44.488 17:27:26 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.488 17:27:26 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.488 17:27:26 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.488 17:27:26 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.488 17:27:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.748 17:27:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:44.748 17:27:26 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.748 17:27:26 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.748 --rc genhtml_branch_coverage=1 00:06:44.748 --rc genhtml_function_coverage=1 00:06:44.748 --rc genhtml_legend=1 00:06:44.748 --rc geninfo_all_blocks=1 00:06:44.748 --rc geninfo_unexecuted_blocks=1 00:06:44.748 00:06:44.748 ' 00:06:44.748 17:27:26 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.748 --rc genhtml_branch_coverage=1 00:06:44.748 --rc genhtml_function_coverage=1 00:06:44.748 --rc genhtml_legend=1 00:06:44.748 --rc geninfo_all_blocks=1 00:06:44.748 --rc geninfo_unexecuted_blocks=1 00:06:44.748 00:06:44.748 ' 00:06:44.748 17:27:26 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.748 --rc genhtml_branch_coverage=1 00:06:44.748 --rc genhtml_function_coverage=1 00:06:44.748 --rc genhtml_legend=1 00:06:44.748 --rc geninfo_all_blocks=1 00:06:44.748 --rc geninfo_unexecuted_blocks=1 00:06:44.748 00:06:44.748 ' 00:06:44.748 17:27:26 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.748 --rc genhtml_branch_coverage=1 00:06:44.748 --rc genhtml_function_coverage=1 00:06:44.748 --rc genhtml_legend=1 00:06:44.748 --rc geninfo_all_blocks=1 00:06:44.748 --rc geninfo_unexecuted_blocks=1 00:06:44.748 00:06:44.748 ' 00:06:44.749 17:27:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:44.749 OK 00:06:44.749 17:27:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:44.749 00:06:44.749 real 0m0.221s 00:06:44.749 user 0m0.142s 00:06:44.749 sys 0m0.090s 00:06:44.749 17:27:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.749 17:27:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:44.749 ************************************ 00:06:44.749 END TEST rpc_client 00:06:44.749 ************************************ 00:06:44.749 17:27:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:44.749 17:27:26 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.749 17:27:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.749 17:27:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.749 ************************************ 00:06:44.749 START TEST json_config 00:06:44.749 ************************************ 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.749 17:27:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.749 17:27:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.749 17:27:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.749 17:27:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.749 17:27:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.749 17:27:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:44.749 17:27:26 json_config -- scripts/common.sh@345 -- # : 1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.749 17:27:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.749 17:27:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@353 -- # local d=1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.749 17:27:26 json_config -- scripts/common.sh@355 -- # echo 1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.749 17:27:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@353 -- # local d=2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.749 17:27:26 json_config -- scripts/common.sh@355 -- # echo 2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.749 17:27:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.749 17:27:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.749 17:27:26 json_config -- scripts/common.sh@368 -- # return 0 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.749 --rc genhtml_branch_coverage=1 00:06:44.749 --rc genhtml_function_coverage=1 00:06:44.749 --rc genhtml_legend=1 00:06:44.749 --rc geninfo_all_blocks=1 00:06:44.749 --rc geninfo_unexecuted_blocks=1 00:06:44.749 00:06:44.749 ' 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.749 --rc genhtml_branch_coverage=1 00:06:44.749 --rc genhtml_function_coverage=1 00:06:44.749 --rc genhtml_legend=1 00:06:44.749 --rc geninfo_all_blocks=1 00:06:44.749 --rc geninfo_unexecuted_blocks=1 00:06:44.749 00:06:44.749 ' 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.749 --rc genhtml_branch_coverage=1 00:06:44.749 --rc genhtml_function_coverage=1 00:06:44.749 --rc genhtml_legend=1 00:06:44.749 --rc geninfo_all_blocks=1 00:06:44.749 --rc geninfo_unexecuted_blocks=1 00:06:44.749 00:06:44.749 ' 00:06:44.749 17:27:26 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.749 --rc genhtml_branch_coverage=1 00:06:44.749 --rc genhtml_function_coverage=1 00:06:44.749 --rc genhtml_legend=1 00:06:44.749 --rc geninfo_all_blocks=1 00:06:44.749 --rc geninfo_unexecuted_blocks=1 00:06:44.749 00:06:44.749 ' 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.749 17:27:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.749 17:27:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.749 17:27:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.749 17:27:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.749 17:27:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.749 17:27:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.749 17:27:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.749 17:27:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.749 17:27:26 json_config -- paths/export.sh@5 -- # export PATH 00:06:44.749 17:27:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@51 -- # : 0 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.749 17:27:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.749 17:27:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:44.749 17:27:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.750 INFO: JSON configuration test init 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:44.750 17:27:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.750 17:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.750 17:27:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:44.750 17:27:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.750 17:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.009 17:27:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:45.009 17:27:26 json_config -- json_config/common.sh@9 -- # local app=target 00:06:45.009 17:27:26 json_config -- json_config/common.sh@10 -- # shift 
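json_config drives separate target and initiator apps over distinct sockets, so every RPC in this suite goes through a wrapper that pins rpc.py -s to the right path. A minimal sketch of tgt_rpc using the app_socket entries declared above:

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    tgt_rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${app_socket[target]}" "$@"
    }
    tgt_rpc save_config > spdk_tgt_config.json   # the kind of call the trace issues below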
00:06:45.009 17:27:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:45.009 17:27:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:45.009 17:27:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:45.009 17:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.009 17:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.009 17:27:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72152 00:06:45.009 17:27:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:45.009 Waiting for target to run... 00:06:45.009 17:27:26 json_config -- json_config/common.sh@25 -- # waitforlisten 72152 /var/tmp/spdk_tgt.sock 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 72152 ']' 00:06:45.009 17:27:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.009 17:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.009 [2024-11-20 17:27:26.612624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
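The tgt_check_notification_types step a little further on verifies that notify_get_types reports exactly the expected set: concatenating both lists and keeping the lines that appear only once (uniq -u) yields the symmetric difference, which must come out empty. The same computation, condensed from the trace:

    enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
    get_types=$(tgt_rpc notify_get_types | jq -r '.[]')
    # both lists are word-split on purpose, one type per line
    type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)
    [[ -z "$type_diff" ]] || exit 1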
00:06:45.009 [2024-11-20 17:27:26.612722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72152 ] 00:06:45.577 [2024-11-20 17:27:27.076195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.577 [2024-11-20 17:27:27.174095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:45.837 00:06:45.837 17:27:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:45.837 17:27:27 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:45.837 17:27:27 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.837 17:27:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:45.837 17:27:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.837 17:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.096 17:27:27 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:46.096 17:27:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:46.096 17:27:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:46.355 17:27:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.355 17:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:46.355 17:27:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:46.614 17:27:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:46.614 17:27:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:46.614 17:27:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@54 -- # sort 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:46.873 17:27:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.873 17:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:46.873 17:27:28 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:46.873 17:27:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.874 17:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.874 17:27:28 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:46.874 17:27:28 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:46.874 17:27:28 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:46.874 17:27:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:46.874 17:27:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:47.132 MallocForNvmf0 00:06:47.132 17:27:28 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:47.132 17:27:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:47.391 MallocForNvmf1 00:06:47.391 17:27:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:47.391 17:27:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:47.660 [2024-11-20 17:27:29.243318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.660 17:27:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:47.660 17:27:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:47.932 17:27:29 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:47.932 17:27:29 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:48.191 17:27:29 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:48.191 17:27:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:48.449 17:27:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:48.449 17:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:48.449 [2024-11-20 17:27:30.223923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:48.708 17:27:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:48.708 17:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.708 17:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 17:27:30 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:48.708 17:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.708 17:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 17:27:30 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:48.708 17:27:30 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:48.708 17:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:48.966 MallocBdevForConfigChangeCheck 00:06:48.966 17:27:30 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:48.966 17:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.966 17:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.966 17:27:30 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:48.966 17:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.225 INFO: shutting down applications... 00:06:49.225 17:27:31 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
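Note: the tgt_check_notification_types step above computes a set difference in plain shell: the expected type names and the names reported by notify_get_types are concatenated, split one token per line, sorted, and filtered with 'uniq -u', which keeps only lines that occur exactly once. An empty result therefore means both sides named exactly the same set. A condensed sketch of the traced pipeline:
expected='bdev_register bdev_unregister fsdev_register fsdev_unregister'
got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
type_diff=$(echo $expected $got | tr ' ' '\n' | sort | uniq -u)   # unquoted on purpose: rely on word splitting
[[ -z $type_diff ]]   # empty symmetric difference => type sets match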
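Note: the create_nvmf_subsystem_config step traced above reduces to a short rpc.py sequence against the target's UNIX domain socket. A condensed sketch of the same calls, runnable by hand against an already-started spdk_tgt, with paths and option values exactly as they appear in the trace:
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'   # unquoted $rpc below relies on word splitting
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB malloc bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport: 8 KiB io_unit_size, no in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420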
00:06:49.225 17:27:31 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:49.225 17:27:31 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:49.225 17:27:31 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:49.225 17:27:31 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:49.794 Calling clear_iscsi_subsystem 00:06:49.794 Calling clear_nvmf_subsystem 00:06:49.794 Calling clear_nbd_subsystem 00:06:49.794 Calling clear_ublk_subsystem 00:06:49.794 Calling clear_vhost_blk_subsystem 00:06:49.794 Calling clear_vhost_scsi_subsystem 00:06:49.794 Calling clear_bdev_subsystem 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:49.794 17:27:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:50.053 17:27:31 json_config -- json_config/json_config.sh@352 -- # break 00:06:50.053 17:27:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:50.053 17:27:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:50.053 17:27:31 json_config -- json_config/common.sh@31 -- # local app=target 00:06:50.053 17:27:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.053 17:27:31 json_config -- json_config/common.sh@35 -- # [[ -n 72152 ]] 00:06:50.053 17:27:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72152 00:06:50.053 17:27:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.053 17:27:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.053 17:27:31 json_config -- json_config/common.sh@41 -- # kill -0 72152 00:06:50.053 17:27:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.622 17:27:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.622 17:27:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.622 17:27:32 json_config -- json_config/common.sh@41 -- # kill -0 72152 00:06:50.622 17:27:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:50.622 17:27:32 json_config -- json_config/common.sh@43 -- # break 00:06:50.622 17:27:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:50.622 SPDK target shutdown done 00:06:50.622 17:27:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:50.622 INFO: relaunching applications... 00:06:50.622 17:27:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
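Note: the shutdown traced above is the harness's SIGINT-then-poll pattern. 'kill -0' delivers no signal and only tests that the pid still exists, and the loop bounds the wait at 30 half-second probes (roughly 15 seconds) before the test would give up. A minimal sketch of the loop json_config/common.sh is running:
kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2> /dev/null || break  # process gone: shutdown done
    sleep 0.5
done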
00:06:50.622 17:27:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:50.622 17:27:32 json_config -- json_config/common.sh@9 -- # local app=target 00:06:50.622 17:27:32 json_config -- json_config/common.sh@10 -- # shift 00:06:50.622 17:27:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:50.622 17:27:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:50.622 17:27:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:50.622 17:27:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.622 17:27:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.622 17:27:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72431 00:06:50.622 17:27:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:50.622 17:27:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:50.622 Waiting for target to run... 00:06:50.622 17:27:32 json_config -- json_config/common.sh@25 -- # waitforlisten 72431 /var/tmp/spdk_tgt.sock 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 72431 ']' 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.622 17:27:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.622 [2024-11-20 17:27:32.285475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:50.622 [2024-11-20 17:27:32.285567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72431 ] 00:06:51.190 [2024-11-20 17:27:32.787990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.190 [2024-11-20 17:27:32.855665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.449 [2024-11-20 17:27:33.185325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.449 [2024-11-20 17:27:33.217387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:51.708 17:27:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.708 00:06:51.708 17:27:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:51.708 17:27:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:51.708 17:27:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:51.708 INFO: Checking if target configuration is the same... 00:06:51.708 17:27:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
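Note: the "is the configuration the same" check that follows compares the relaunched target's live config against the JSON file it was started from. Both sides are normalized with config_filter.py first, since save_config output ordering is not guaranteed to be stable. Roughly what json_diff.sh does with its two inputs (the exact redirections are an assumption; the temp-file names match the mktemp results in the trace):
# /dev/fd/62 carries the live 'rpc.py ... save_config' output
config_filter.py -method sort < /dev/fd/62           > /tmp/62.OIS
config_filter.py -method sort < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.Tk0
diff -u /tmp/62.OIS /tmp/spdk_tgt_config.json.Tk0 \
    && echo 'INFO: JSON config files are the same'   # diff exits 0 on match, 1 on difference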
00:06:51.708 17:27:33 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:51.708 17:27:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:51.708 17:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.708 + '[' 2 -ne 2 ']' 00:06:51.708 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:51.708 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:51.708 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:51.708 +++ basename /dev/fd/62 00:06:51.708 ++ mktemp /tmp/62.XXX 00:06:51.708 + tmp_file_1=/tmp/62.OIS 00:06:51.708 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:51.708 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:51.708 + tmp_file_2=/tmp/spdk_tgt_config.json.Tk0 00:06:51.708 + ret=0 00:06:51.708 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:51.966 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:52.225 + diff -u /tmp/62.OIS /tmp/spdk_tgt_config.json.Tk0 00:06:52.225 INFO: JSON config files are the same 00:06:52.225 + echo 'INFO: JSON config files are the same' 00:06:52.225 + rm /tmp/62.OIS /tmp/spdk_tgt_config.json.Tk0 00:06:52.225 + exit 0 00:06:52.225 17:27:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:52.225 INFO: changing configuration and checking if this can be detected... 00:06:52.225 17:27:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:52.225 17:27:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.225 17:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.485 17:27:34 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:52.485 17:27:34 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:52.485 17:27:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.485 + '[' 2 -ne 2 ']' 00:06:52.485 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:52.485 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:52.485 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:52.485 +++ basename /dev/fd/62 00:06:52.485 ++ mktemp /tmp/62.XXX 00:06:52.485 + tmp_file_1=/tmp/62.GwJ 00:06:52.485 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:52.485 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:52.485 + tmp_file_2=/tmp/spdk_tgt_config.json.BgY 00:06:52.485 + ret=0 00:06:52.485 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:53.053 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:53.053 + diff -u /tmp/62.GwJ /tmp/spdk_tgt_config.json.BgY 00:06:53.053 + ret=1 00:06:53.053 + echo '=== Start of file: /tmp/62.GwJ ===' 00:06:53.053 + cat /tmp/62.GwJ 00:06:53.053 + echo '=== End of file: /tmp/62.GwJ ===' 00:06:53.053 + echo '' 00:06:53.053 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BgY ===' 00:06:53.053 + cat /tmp/spdk_tgt_config.json.BgY 00:06:53.053 + echo '=== End of file: /tmp/spdk_tgt_config.json.BgY ===' 00:06:53.053 + echo '' 00:06:53.053 + rm /tmp/62.GwJ /tmp/spdk_tgt_config.json.BgY 00:06:53.053 + exit 1 00:06:53.053 INFO: configuration change detected. 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 72431 ]] 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.053 17:27:34 json_config -- json_config/json_config.sh@330 -- # killprocess 72431 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@950 -- # '[' -z 72431 ']' 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@954 -- # kill -0 72431 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@955 -- # uname 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72431 00:06:53.053 
17:27:34 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.053 killing process with pid 72431 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72431' 00:06:53.053 17:27:34 json_config -- common/autotest_common.sh@969 -- # kill 72431 00:06:53.054 17:27:34 json_config -- common/autotest_common.sh@974 -- # wait 72431 00:06:53.313 17:27:34 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:53.313 17:27:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:53.313 17:27:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.313 17:27:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.313 INFO: Success 00:06:53.313 17:27:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:53.313 17:27:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:53.313 00:06:53.313 real 0m8.645s 00:06:53.313 user 0m12.149s 00:06:53.313 sys 0m2.028s 00:06:53.313 17:27:35 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.313 ************************************ 00:06:53.313 END TEST json_config 00:06:53.313 ************************************ 00:06:53.313 17:27:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.313 17:27:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.313 17:27:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.313 17:27:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.313 17:27:35 -- common/autotest_common.sh@10 -- # set +x 00:06:53.313 ************************************ 00:06:53.313 START TEST json_config_extra_key 00:06:53.313 ************************************ 00:06:53.313 17:27:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.573 17:27:35 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.573 --rc genhtml_branch_coverage=1 00:06:53.573 --rc genhtml_function_coverage=1 00:06:53.573 --rc genhtml_legend=1 00:06:53.573 --rc geninfo_all_blocks=1 00:06:53.573 --rc geninfo_unexecuted_blocks=1 00:06:53.573 00:06:53.573 ' 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.573 --rc genhtml_branch_coverage=1 00:06:53.573 --rc genhtml_function_coverage=1 00:06:53.573 --rc genhtml_legend=1 00:06:53.573 --rc geninfo_all_blocks=1 00:06:53.573 --rc geninfo_unexecuted_blocks=1 00:06:53.573 00:06:53.573 ' 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.573 --rc genhtml_branch_coverage=1 00:06:53.573 --rc genhtml_function_coverage=1 00:06:53.573 --rc genhtml_legend=1 00:06:53.573 --rc geninfo_all_blocks=1 00:06:53.573 --rc geninfo_unexecuted_blocks=1 00:06:53.573 00:06:53.573 ' 00:06:53.573 17:27:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.573 --rc genhtml_branch_coverage=1 00:06:53.573 --rc genhtml_function_coverage=1 00:06:53.573 --rc genhtml_legend=1 00:06:53.573 --rc geninfo_all_blocks=1 00:06:53.573 --rc geninfo_unexecuted_blocks=1 00:06:53.573 00:06:53.573 ' 00:06:53.573 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.573 17:27:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.573 17:27:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.574 17:27:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.574 17:27:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.574 17:27:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.574 17:27:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:53.574 17:27:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.574 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.574 17:27:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.574 INFO: launching applications... 00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
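Note: the "integer expression expected" line above is a real but non-fatal bash error. nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) cannot compare an empty string as an integer, so the condition fails and the script simply falls through. The variable under test is not visible in this trace; a hypothetical guard that would silence the noise is to give it a numeric default before comparing:
# SOME_FLAG is a stand-in name; the actual variable is not shown in the trace
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    :   # whatever the guard enables; the body is irrelevant to the fix
fi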
00:06:53.574 17:27:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72615 00:06:53.574 Waiting for target to run... 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72615 /var/tmp/spdk_tgt.sock 00:06:53.574 17:27:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 72615 ']' 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.574 17:27:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.574 [2024-11-20 17:27:35.331631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.574 [2024-11-20 17:27:35.331744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:06:54.142 [2024-11-20 17:27:35.883389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.401 [2024-11-20 17:27:35.951453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.660 17:27:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.660 17:27:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:54.660 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:54.660 INFO: shutting down applications... 00:06:54.660 17:27:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
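Note: waitforlisten above (local max_retries=100) blocks until the freshly launched target answers RPCs on /var/tmp/spdk_tgt.sock. A minimal sketch of the idea, assuming rpc_get_methods as the probe call and 0.5 s as the retry interval (the real helper in autotest_common.sh is more involved):
sock=/var/tmp/spdk_tgt.sock
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        break                             # socket is up and the app is servicing RPCs
    fi
    sleep 0.5
done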
00:06:54.660 17:27:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72615 ]] 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72615 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72615 00:06:54.660 17:27:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72615 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:55.227 SPDK target shutdown done 00:06:55.227 17:27:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:55.227 Success 00:06:55.227 17:27:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:55.227 00:06:55.227 real 0m1.744s 00:06:55.227 user 0m1.454s 00:06:55.227 sys 0m0.603s 00:06:55.227 17:27:36 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.227 ************************************ 00:06:55.227 END TEST json_config_extra_key 00:06:55.227 ************************************ 00:06:55.227 17:27:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:55.227 17:27:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.227 17:27:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.227 17:27:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.227 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:06:55.227 ************************************ 00:06:55.227 START TEST alias_rpc 00:06:55.227 ************************************ 00:06:55.227 17:27:36 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.227 * Looking for test storage... 
00:06:55.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:55.227 17:27:36 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.227 17:27:36 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.227 17:27:36 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.486 17:27:37 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:55.486 17:27:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.487 17:27:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.487 --rc genhtml_branch_coverage=1 00:06:55.487 --rc genhtml_function_coverage=1 00:06:55.487 --rc genhtml_legend=1 00:06:55.487 --rc geninfo_all_blocks=1 00:06:55.487 --rc geninfo_unexecuted_blocks=1 00:06:55.487 00:06:55.487 ' 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.487 --rc genhtml_branch_coverage=1 00:06:55.487 --rc genhtml_function_coverage=1 00:06:55.487 --rc genhtml_legend=1 00:06:55.487 --rc geninfo_all_blocks=1 00:06:55.487 --rc geninfo_unexecuted_blocks=1 00:06:55.487 00:06:55.487 ' 00:06:55.487 17:27:37 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.487 --rc genhtml_branch_coverage=1 00:06:55.487 --rc genhtml_function_coverage=1 00:06:55.487 --rc genhtml_legend=1 00:06:55.487 --rc geninfo_all_blocks=1 00:06:55.487 --rc geninfo_unexecuted_blocks=1 00:06:55.487 00:06:55.487 ' 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.487 --rc genhtml_branch_coverage=1 00:06:55.487 --rc genhtml_function_coverage=1 00:06:55.487 --rc genhtml_legend=1 00:06:55.487 --rc geninfo_all_blocks=1 00:06:55.487 --rc geninfo_unexecuted_blocks=1 00:06:55.487 00:06:55.487 ' 00:06:55.487 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.487 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72700 00:06:55.487 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72700 00:06:55.487 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 72700 ']' 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.487 17:27:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.487 [2024-11-20 17:27:37.123804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:55.487 [2024-11-20 17:27:37.123918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72700 ] 00:06:55.487 [2024-11-20 17:27:37.263346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.746 [2024-11-20 17:27:37.323713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.004 17:27:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.004 17:27:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.004 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:56.263 17:27:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72700 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 72700 ']' 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 72700 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72700 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.263 killing process with pid 72700 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72700' 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 72700 00:06:56.263 17:27:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 72700 00:06:56.523 00:06:56.523 real 0m1.429s 00:06:56.523 user 0m1.490s 00:06:56.523 sys 0m0.464s 00:06:56.523 17:27:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.523 ************************************ 00:06:56.523 END TEST alias_rpc 00:06:56.523 ************************************ 00:06:56.523 17:27:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.782 17:27:38 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:06:56.782 17:27:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.782 17:27:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.782 17:27:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.782 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:06:56.782 ************************************ 00:06:56.782 START TEST dpdk_mem_utility 00:06:56.782 ************************************ 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.782 * Looking for test storage... 
00:06:56.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.782 17:27:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.782 --rc genhtml_branch_coverage=1 00:06:56.782 --rc genhtml_function_coverage=1 00:06:56.782 --rc genhtml_legend=1 00:06:56.782 --rc geninfo_all_blocks=1 00:06:56.782 --rc geninfo_unexecuted_blocks=1 00:06:56.782 00:06:56.782 ' 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.782 --rc 
genhtml_branch_coverage=1 00:06:56.782 --rc genhtml_function_coverage=1 00:06:56.782 --rc genhtml_legend=1 00:06:56.782 --rc geninfo_all_blocks=1 00:06:56.782 --rc geninfo_unexecuted_blocks=1 00:06:56.782 00:06:56.782 ' 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.782 --rc genhtml_branch_coverage=1 00:06:56.782 --rc genhtml_function_coverage=1 00:06:56.782 --rc genhtml_legend=1 00:06:56.782 --rc geninfo_all_blocks=1 00:06:56.782 --rc geninfo_unexecuted_blocks=1 00:06:56.782 00:06:56.782 ' 00:06:56.782 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.782 --rc genhtml_branch_coverage=1 00:06:56.782 --rc genhtml_function_coverage=1 00:06:56.782 --rc genhtml_legend=1 00:06:56.782 --rc geninfo_all_blocks=1 00:06:56.782 --rc geninfo_unexecuted_blocks=1 00:06:56.782 00:06:56.782 ' 00:06:56.782 17:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:56.782 17:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72792 00:06:56.783 17:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72792 00:06:56.783 17:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 72792 ']' 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.783 17:27:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:57.042 [2024-11-20 17:27:38.614464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:57.042 [2024-11-20 17:27:38.614578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72792 ] 00:06:57.042 [2024-11-20 17:27:38.752322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.042 [2024-11-20 17:27:38.808587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.301 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.301 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:57.301 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:57.301 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:57.301 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.301 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:57.301 { 00:06:57.301 "filename": "/tmp/spdk_mem_dump.txt" 00:06:57.301 } 00:06:57.301 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.301 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:57.572 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:57.572 1 heaps totaling size 860.000000 MiB 00:06:57.572 size: 860.000000 MiB heap id: 0 00:06:57.572 end heaps---------- 00:06:57.572 9 mempools totaling size 642.649841 MiB 00:06:57.572 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:57.572 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:57.572 size: 92.545471 MiB name: bdev_io_72792 00:06:57.572 size: 51.011292 MiB name: evtpool_72792 00:06:57.572 size: 50.003479 MiB name: msgpool_72792 00:06:57.572 size: 36.509338 MiB name: fsdev_io_72792 00:06:57.572 size: 21.763794 MiB name: PDU_Pool 00:06:57.572 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:57.572 size: 0.026123 MiB name: Session_Pool 00:06:57.572 end mempools------- 00:06:57.572 6 memzones totaling size 4.142822 MiB 00:06:57.572 size: 1.000366 MiB name: RG_ring_0_72792 00:06:57.572 size: 1.000366 MiB name: RG_ring_1_72792 00:06:57.572 size: 1.000366 MiB name: RG_ring_4_72792 00:06:57.572 size: 1.000366 MiB name: RG_ring_5_72792 00:06:57.572 size: 0.125366 MiB name: RG_ring_2_72792 00:06:57.572 size: 0.015991 MiB name: RG_ring_3_72792 00:06:57.572 end memzones------- 00:06:57.572 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:57.572 heap id: 0 total size: 860.000000 MiB number of busy elements: 269 number of free elements: 16 00:06:57.572 list of free elements. 
size: 13.943481 MiB 00:06:57.572 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:57.572 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:57.572 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:57.572 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:57.572 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:57.572 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:57.572 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:57.572 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:57.572 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:57.572 element at address: 0x20001d800000 with size: 0.573364 MiB 00:06:57.572 element at address: 0x20000d800000 with size: 0.489441 MiB 00:06:57.572 element at address: 0x200003e00000 with size: 0.488281 MiB 00:06:57.572 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:57.572 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:57.572 element at address: 0x20002ac00000 with size: 0.398499 MiB 00:06:57.572 element at address: 0x200003a00000 with size: 0.351562 MiB 00:06:57.572 list of standard malloc elements. size: 199.259827 MiB 00:06:57.572 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:57.572 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:57.572 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:57.572 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:57.572 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:57.572 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:57.572 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:57.572 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:57.572 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:57.572 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 
00:06:57.572 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a5a000 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a5e4c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7e780 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7e840 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7e900 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:57.572 element at 
address: 0x200003e7d240 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:57.572 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000707b6c0 
with size: 0.000183 MiB 00:06:57.573 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:57.573 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894480 with size: 0.000183 MiB 
00:06:57.573 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac66040 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac66100 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6cd00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:57.573 element at 
address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:57.573 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:57.573 list of memzone associated elements. 
size: 646.796692 MiB 00:06:57.573 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:57.573 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:57.573 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:57.573 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:57.573 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:57.573 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_72792_0 00:06:57.573 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:57.573 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72792_0 00:06:57.573 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:57.573 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72792_0 00:06:57.573 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:57.573 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72792_0 00:06:57.573 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:57.573 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:57.573 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:57.573 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:57.573 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:57.573 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72792 00:06:57.573 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:57.573 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72792 00:06:57.573 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:57.573 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72792 00:06:57.573 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:57.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:57.573 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:57.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:57.573 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:57.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:57.573 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:57.573 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:57.573 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:57.573 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72792 00:06:57.573 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:57.573 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72792 00:06:57.573 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:57.573 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72792 00:06:57.573 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:57.573 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72792 00:06:57.573 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:57.573 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72792 00:06:57.573 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:57.573 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72792 00:06:57.573 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:57.573 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:57.573 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:57.573 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:57.573 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:57.573 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:57.573 element at address: 0x200003a5e580 with size: 0.125488 MiB 00:06:57.573 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72792 00:06:57.573 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:57.573 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:57.573 element at address: 0x20002ac661c0 with size: 0.023743 MiB 00:06:57.573 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:57.573 element at address: 0x200003a5a2c0 with size: 0.016113 MiB 00:06:57.573 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72792 00:06:57.573 element at address: 0x20002ac6c300 with size: 0.002441 MiB 00:06:57.573 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:57.573 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:57.573 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72792 00:06:57.573 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:57.573 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72792 00:06:57.573 element at address: 0x200003a5a0c0 with size: 0.000305 MiB 00:06:57.573 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72792 00:06:57.573 element at address: 0x20002ac6cdc0 with size: 0.000305 MiB 00:06:57.573 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:57.573 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:57.573 17:27:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72792 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 72792 ']' 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 72792 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72792 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.573 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.573 killing process with pid 72792 00:06:57.574 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72792' 00:06:57.574 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 72792 00:06:57.574 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 72792 00:06:58.140 00:06:58.140 real 0m1.293s 00:06:58.140 user 0m1.223s 00:06:58.140 sys 0m0.447s 00:06:58.140 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.140 17:27:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.140 ************************************ 00:06:58.140 END TEST dpdk_mem_utility 00:06:58.140 ************************************ 00:06:58.140 17:27:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:58.140 17:27:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.140 17:27:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.140 17:27:39 -- common/autotest_common.sh@10 -- # set +x 
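The dpdk_mem_utility run above reduces to three steps: start spdk_tgt on one core, ask it to dump DPDK memory stats over RPC (which writes /tmp/spdk_mem_dump.txt, as the RPC reply shows), and post-process the dump with scripts/dpdk_mem_info.py, first for the heap/mempool/memzone summary and then with -m 0 for the element-level view of heap 0. A minimal sketch of that flow, assuming spdk_tgt is built under build/bin (the binary location is not shown in the trace):

# Sketch: the dpdk_mem_utility flow by hand; the spdk_tgt path is an assumption.
REPO=/home/vagrant/spdk_repo/spdk
$REPO/build/bin/spdk_tgt -m 0x1 &            # single-core target, matching -c 0x1 above
spdkpid=$!
sleep 1                                      # crude wait; the real test polls for the RPC socket
$REPO/scripts/rpc.py env_dpdk_get_mem_stats  # dumps stats to /tmp/spdk_mem_dump.txt
$REPO/scripts/dpdk_mem_info.py               # heap/mempool/memzone summary
$REPO/scripts/dpdk_mem_info.py -m 0          # element-level detail for heap id 0
kill $spdkpid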
00:06:58.140 ************************************ 00:06:58.140 START TEST event 00:06:58.140 ************************************ 00:06:58.140 17:27:39 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:58.140 * Looking for test storage... 00:06:58.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:58.140 17:27:39 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.140 17:27:39 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.140 17:27:39 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.140 17:27:39 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.140 17:27:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.140 17:27:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.140 17:27:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.140 17:27:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.140 17:27:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.140 17:27:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.140 17:27:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.140 17:27:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.140 17:27:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.140 17:27:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.140 17:27:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.140 17:27:39 event -- scripts/common.sh@344 -- # case "$op" in 00:06:58.140 17:27:39 event -- scripts/common.sh@345 -- # : 1 00:06:58.140 17:27:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.140 17:27:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.140 17:27:39 event -- scripts/common.sh@365 -- # decimal 1 00:06:58.140 17:27:39 event -- scripts/common.sh@353 -- # local d=1 00:06:58.140 17:27:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.140 17:27:39 event -- scripts/common.sh@355 -- # echo 1 00:06:58.141 17:27:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.141 17:27:39 event -- scripts/common.sh@366 -- # decimal 2 00:06:58.141 17:27:39 event -- scripts/common.sh@353 -- # local d=2 00:06:58.141 17:27:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.141 17:27:39 event -- scripts/common.sh@355 -- # echo 2 00:06:58.141 17:27:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.141 17:27:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.141 17:27:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.141 17:27:39 event -- scripts/common.sh@368 -- # return 0 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.141 --rc genhtml_branch_coverage=1 00:06:58.141 --rc genhtml_function_coverage=1 00:06:58.141 --rc genhtml_legend=1 00:06:58.141 --rc geninfo_all_blocks=1 00:06:58.141 --rc geninfo_unexecuted_blocks=1 00:06:58.141 00:06:58.141 ' 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.141 --rc genhtml_branch_coverage=1 00:06:58.141 --rc genhtml_function_coverage=1 00:06:58.141 --rc genhtml_legend=1 00:06:58.141 --rc 
geninfo_all_blocks=1 00:06:58.141 --rc geninfo_unexecuted_blocks=1 00:06:58.141 00:06:58.141 ' 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.141 --rc genhtml_branch_coverage=1 00:06:58.141 --rc genhtml_function_coverage=1 00:06:58.141 --rc genhtml_legend=1 00:06:58.141 --rc geninfo_all_blocks=1 00:06:58.141 --rc geninfo_unexecuted_blocks=1 00:06:58.141 00:06:58.141 ' 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.141 --rc genhtml_branch_coverage=1 00:06:58.141 --rc genhtml_function_coverage=1 00:06:58.141 --rc genhtml_legend=1 00:06:58.141 --rc geninfo_all_blocks=1 00:06:58.141 --rc geninfo_unexecuted_blocks=1 00:06:58.141 00:06:58.141 ' 00:06:58.141 17:27:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:58.141 17:27:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:58.141 17:27:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:58.141 17:27:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.141 17:27:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.141 ************************************ 00:06:58.141 START TEST event_perf 00:06:58.141 ************************************ 00:06:58.141 17:27:39 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:58.141 Running I/O for 1 seconds...[2024-11-20 17:27:39.913370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:58.141 [2024-11-20 17:27:39.913605] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72876 ] 00:06:58.400 [2024-11-20 17:27:40.051324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.400 [2024-11-20 17:27:40.110361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.400 [2024-11-20 17:27:40.110516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.400 Running I/O for 1 seconds...[2024-11-20 17:27:40.110644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.400 [2024-11-20 17:27:40.110644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.776 00:06:59.776 lcore 0: 122007 00:06:59.776 lcore 1: 122003 00:06:59.776 lcore 2: 122005 00:06:59.776 lcore 3: 122007 00:06:59.776 done. 
00:06:59.776 00:06:59.776 ************************************ 00:06:59.776 END TEST event_perf 00:06:59.776 ************************************ 00:06:59.776 real 0m1.276s 00:06:59.776 user 0m4.080s 00:06:59.776 sys 0m0.063s 00:06:59.776 17:27:41 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.776 17:27:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.776 17:27:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:59.777 17:27:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:59.777 17:27:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.777 17:27:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.777 ************************************ 00:06:59.777 START TEST event_reactor 00:06:59.777 ************************************ 00:06:59.777 17:27:41 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:59.777 [2024-11-20 17:27:41.237812] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.777 [2024-11-20 17:27:41.238870] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72913 ] 00:06:59.777 [2024-11-20 17:27:41.374448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.777 [2024-11-20 17:27:41.428873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.713 test_start 00:07:00.713 oneshot 00:07:00.713 tick 100 00:07:00.713 tick 100 00:07:00.713 tick 250 00:07:00.713 tick 100 00:07:00.713 tick 100 00:07:00.713 tick 250 00:07:00.713 tick 100 00:07:00.713 tick 500 00:07:00.713 tick 100 00:07:00.713 tick 100 00:07:00.713 tick 250 00:07:00.713 tick 100 00:07:00.713 tick 100 00:07:00.713 test_end 00:07:00.713 00:07:00.713 real 0m1.259s 00:07:00.713 user 0m1.092s 00:07:00.713 sys 0m0.058s 00:07:00.713 17:27:42 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.713 17:27:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:00.713 ************************************ 00:07:00.713 END TEST event_reactor 00:07:00.713 ************************************ 00:07:00.973 17:27:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:00.973 17:27:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:00.973 17:27:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.973 17:27:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.973 ************************************ 00:07:00.973 START TEST event_reactor_perf 00:07:00.973 ************************************ 00:07:00.973 17:27:42 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:00.973 [2024-11-20 17:27:42.558366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.973 [2024-11-20 17:27:42.558476] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72944 ] 00:07:00.973 [2024-11-20 17:27:42.695251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.231 [2024-11-20 17:27:42.761910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.167 test_start 00:07:02.167 test_end 00:07:02.167 Performance: 482538 events per second 00:07:02.167 00:07:02.167 real 0m1.275s 00:07:02.167 user 0m1.105s 00:07:02.167 sys 0m0.064s 00:07:02.167 17:27:43 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.167 ************************************ 00:07:02.167 END TEST event_reactor_perf 00:07:02.167 ************************************ 00:07:02.167 17:27:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.167 17:27:43 event -- event/event.sh@49 -- # uname -s 00:07:02.167 17:27:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:02.167 17:27:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:02.167 17:27:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.167 17:27:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.167 17:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.167 ************************************ 00:07:02.167 START TEST event_scheduler 00:07:02.167 ************************************ 00:07:02.167 17:27:43 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:02.167 * Looking for test storage... 
00:07:02.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:02.425 17:27:43 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.425 17:27:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.425 17:27:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.425 17:27:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.425 --rc genhtml_branch_coverage=1 00:07:02.425 --rc genhtml_function_coverage=1 00:07:02.425 --rc genhtml_legend=1 00:07:02.425 --rc geninfo_all_blocks=1 00:07:02.425 --rc geninfo_unexecuted_blocks=1 00:07:02.425 00:07:02.425 ' 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.425 --rc genhtml_branch_coverage=1 00:07:02.425 --rc genhtml_function_coverage=1 00:07:02.425 --rc genhtml_legend=1 00:07:02.425 --rc geninfo_all_blocks=1 00:07:02.425 --rc geninfo_unexecuted_blocks=1 00:07:02.425 00:07:02.425 ' 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.425 --rc genhtml_branch_coverage=1 00:07:02.425 --rc genhtml_function_coverage=1 00:07:02.425 --rc genhtml_legend=1 00:07:02.425 --rc geninfo_all_blocks=1 00:07:02.425 --rc geninfo_unexecuted_blocks=1 00:07:02.425 00:07:02.425 ' 00:07:02.425 17:27:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.425 --rc genhtml_branch_coverage=1 00:07:02.425 --rc genhtml_function_coverage=1 00:07:02.425 --rc genhtml_legend=1 00:07:02.425 --rc geninfo_all_blocks=1 00:07:02.425 --rc geninfo_unexecuted_blocks=1 00:07:02.425 00:07:02.426 ' 00:07:02.426 17:27:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:02.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
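The scripts/common.sh trace above (it runs once per test suite) is a field-wise version comparison: "lt 1.15 2" splits both version strings on ".", "-", and ":" and compares the fields numerically to decide whether the installed lcov predates 2.x. A rough reconstruction of that logic; the zero-padding of missing fields and the equality fallthrough are assumptions, not verbatim script code:

# Reconstructed sketch of the cmp_versions check traced above; details are assumptions.
cmp_versions() {
    local IFS='.-:' op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"                       # e.g. "1.15" -> (1 15)
    read -ra ver2 <<< "$3"                       # e.g. "2"    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}    # treat missing fields as 0 (assumption)
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *'='* ]]                           # all fields equal: only ==, <=, >= succeed
}
cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 < 2'  # matches the "lt 1.15 2 ... return 0" trace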
00:07:02.426 17:27:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73014 00:07:02.426 17:27:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.426 17:27:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73014 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 73014 ']' 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.426 17:27:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.426 17:27:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.426 [2024-11-20 17:27:44.126324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.426 [2024-11-20 17:27:44.126592] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73014 ] 00:07:02.684 [2024-11-20 17:27:44.268624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.684 [2024-11-20 17:27:44.363231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.684 [2024-11-20 17:27:44.363291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.684 [2024-11-20 17:27:44.363341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.684 [2024-11-20 17:27:44.363350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.620 17:27:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.620 17:27:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:03.620 17:27:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:03.620 17:27:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.620 17:27:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:03.620 POWER: Cannot set governor of lcore 0 to userspace 00:07:03.620 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:03.621 POWER: Cannot set governor of lcore 0 to performance 00:07:03.621 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:03.621 POWER: Cannot set governor of lcore 0 to userspace 00:07:03.621 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:03.621 POWER: Cannot set governor of lcore 0 to userspace 00:07:03.621 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:03.621 POWER: Unable to set Power Management Environment for lcore 0 00:07:03.621 [2024-11-20 17:27:45.157971] dpdk_governor.c: 
130:_init_core: *ERROR*: Failed to initialize on core0 00:07:03.621 [2024-11-20 17:27:45.157984] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:03.621 [2024-11-20 17:27:45.157992] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:03.621 [2024-11-20 17:27:45.158003] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:03.621 [2024-11-20 17:27:45.158010] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:03.621 [2024-11-20 17:27:45.158016] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 [2024-11-20 17:27:45.277055] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 ************************************ 00:07:03.621 START TEST scheduler_create_thread 00:07:03.621 ************************************ 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 2 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 3 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 4 00:07:03.621 17:27:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 5 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 6 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 7 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 8 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 9 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.621 10 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.621 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.188 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.188 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:04.188 17:27:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:04.188 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.188 17:27:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.123 17:27:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.123 17:27:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:05.123 17:27:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.123 17:27:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.056 17:27:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.056 17:27:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:06.056 17:27:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:06.056 17:27:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.056 17:27:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.991 ************************************ 00:07:06.991 END TEST scheduler_create_thread 00:07:06.991 ************************************ 00:07:06.991 17:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.991 00:07:06.991 real 0m3.224s 00:07:06.991 user 0m0.019s 00:07:06.991 sys 0m0.007s 00:07:06.991 17:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.991 17:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.991 17:27:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:06.991 17:27:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73014 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 73014 ']' 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 73014 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73014 00:07:06.991 killing process with pid 73014 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:06.991 17:27:48 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73014' 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 73014 00:07:06.991 17:27:48 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 73014 00:07:07.249 [2024-11-20 17:27:48.894442] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:07.507 00:07:07.507 real 0m5.346s 00:07:07.507 user 0m10.908s 00:07:07.507 sys 0m0.466s 00:07:07.507 17:27:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.507 ************************************ 00:07:07.507 END TEST event_scheduler 00:07:07.507 ************************************ 00:07:07.507 17:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.507 17:27:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:07.507 17:27:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:07.507 17:27:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.507 17:27:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.507 17:27:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.507 ************************************ 00:07:07.507 START TEST app_repeat 00:07:07.507 ************************************ 00:07:07.507 17:27:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73137 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.507 Process app_repeat pid: 73137 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73137' 00:07:07.507 spdk_app_start Round 0 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:07.507 17:27:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73137 /var/tmp/spdk-nbd.sock 00:07:07.507 17:27:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73137 ']' 00:07:07.507 17:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.507 17:27:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:07.508 17:27:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.508 17:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.508 17:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.765 [2024-11-20 17:27:49.311099] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:07.765 [2024-11-20 17:27:49.311360] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73137 ] 00:07:07.765 [2024-11-20 17:27:49.451015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.765 [2024-11-20 17:27:49.516906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.765 [2024-11-20 17:27:49.516917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.024 17:27:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.024 17:27:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:08.024 17:27:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.283 Malloc0 00:07:08.283 17:27:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.542 Malloc1 00:07:08.542 17:27:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.542 17:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.800 /dev/nbd0 00:07:08.800 17:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.800 17:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.800 17:27:50 event.app_repeat -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:08.800 17:27:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.800 17:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.800 17:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.800 17:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.801 1+0 records in 00:07:08.801 1+0 records out 00:07:08.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218234 s, 18.8 MB/s 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.801 17:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.801 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.801 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.801 17:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.059 /dev/nbd1 00:07:09.059 17:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.059 17:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.059 17:27:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.059 17:27:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.060 1+0 records in 00:07:09.060 1+0 records out 00:07:09.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023014 s, 17.8 MB/s 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
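
The waitfornbd helper traced here gates every nbd test on the device actually materializing: it polls /proc/partitions for the device name, then proves the block device is readable with a single direct-I/O block. A standalone sketch; the retry budget, sleep interval, and scratch path are assumptions rather than values taken from this log:

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/tmp/nbdtest            # illustrative scratch file
        # Wait for the kernel to publish the partition entry.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read one 4 KiB block with O_DIRECT; an empty result means not ready.
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [[ $size != 0 ]]
    }

Called as waitfornbd nbd0 right after nbd_start_disk, which is the sequence the trace shows.
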
00:07:09.060 17:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:09.060 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.060 17:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.060 17:27:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.060 17:27:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.060 17:27:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.627 17:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.627 { 00:07:09.627 "bdev_name": "Malloc0", 00:07:09.627 "nbd_device": "/dev/nbd0" 00:07:09.627 }, 00:07:09.627 { 00:07:09.627 "bdev_name": "Malloc1", 00:07:09.627 "nbd_device": "/dev/nbd1" 00:07:09.627 } 00:07:09.627 ]' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.628 { 00:07:09.628 "bdev_name": "Malloc0", 00:07:09.628 "nbd_device": "/dev/nbd0" 00:07:09.628 }, 00:07:09.628 { 00:07:09.628 "bdev_name": "Malloc1", 00:07:09.628 "nbd_device": "/dev/nbd1" 00:07:09.628 } 00:07:09.628 ]' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.628 /dev/nbd1' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.628 /dev/nbd1' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590748 s, 177 MB/s 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024299 s, 43.2 MB/s 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224757 s, 46.7 MB/s 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.628 17:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.886 17:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.143 17:27:51 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.143 17:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.401 17:27:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.401 17:27:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.659 17:27:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.918 [2024-11-20 17:27:52.571592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.918 [2024-11-20 17:27:52.615157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.918 [2024-11-20 17:27:52.615168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.918 [2024-11-20 17:27:52.670294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.918 [2024-11-20 17:27:52.670338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:14.202 17:27:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.202 spdk_app_start Round 1 00:07:14.202 17:27:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:14.202 17:27:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73137 /var/tmp/spdk-nbd.sock 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73137 ']' 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
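
Round 0 just completed a full data-integrity pass: nbd_dd_data_verify in write mode fills a 1 MiB scratch file from /dev/urandom and copies it onto each nbd device with O_DIRECT, and in verify mode compares each device byte-for-byte against the same file. The two phases condensed into one sketch; the scratch path and device list are illustrative:

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Write phase: 256 x 4 KiB of random data onto every device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: cmp exits non-zero on the first differing byte, failing the test.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"
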
00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.202 17:27:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:14.202 17:27:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.202 Malloc0 00:07:14.202 17:27:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.459 Malloc1 00:07:14.459 17:27:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.459 17:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:14.717 /dev/nbd0 00:07:14.717 17:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.974 17:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.974 1+0 records in 00:07:14.974 1+0 records out 
00:07:14.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281473 s, 14.6 MB/s 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.974 17:27:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.974 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.974 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.974 17:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.231 /dev/nbd1 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.231 1+0 records in 00:07:15.231 1+0 records out 00:07:15.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289595 s, 14.1 MB/s 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.231 17:27:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.231 17:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.488 { 00:07:15.488 "bdev_name": "Malloc0", 00:07:15.488 "nbd_device": "/dev/nbd0" 00:07:15.488 }, 00:07:15.488 { 00:07:15.488 "bdev_name": "Malloc1", 00:07:15.488 "nbd_device": "/dev/nbd1" 00:07:15.488 } 
00:07:15.488 ]' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.488 { 00:07:15.488 "bdev_name": "Malloc0", 00:07:15.488 "nbd_device": "/dev/nbd0" 00:07:15.488 }, 00:07:15.488 { 00:07:15.488 "bdev_name": "Malloc1", 00:07:15.488 "nbd_device": "/dev/nbd1" 00:07:15.488 } 00:07:15.488 ]' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.488 /dev/nbd1' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.488 /dev/nbd1' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.488 17:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.489 17:27:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.489 256+0 records in 00:07:15.489 256+0 records out 00:07:15.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975907 s, 107 MB/s 00:07:15.489 17:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.489 17:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.489 256+0 records in 00:07:15.489 256+0 records out 00:07:15.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250609 s, 41.8 MB/s 00:07:15.489 17:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.489 17:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.746 256+0 records in 00:07:15.746 256+0 records out 00:07:15.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298832 s, 35.1 MB/s 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.746 17:27:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.746 17:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.004 17:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.262 17:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.520 17:27:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.520 17:27:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:16.787 17:27:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:16.787 [2024-11-20 17:27:58.546099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.069 [2024-11-20 17:27:58.599207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.069 [2024-11-20 17:27:58.599221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.069 [2024-11-20 17:27:58.651938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.069 [2024-11-20 17:27:58.651998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.632 17:28:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:19.632 spdk_app_start Round 2 00:07:19.632 17:28:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:19.632 17:28:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73137 /var/tmp/spdk-nbd.sock 00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73137 ']' 00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
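
Each spdk_app_start round blocks here on waitforlisten, which has to confirm both that the process survived startup and that its RPC socket is answering. A sketch of that loop; the rpc_get_methods probe, the one-second timeout, and the retry budget are assumptions about the helper, not values visible in this trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -s 0 "$pid" || return 1    # app died before it ever listened
            # Any cheap RPC serves as a liveness probe.
            if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
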
00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.632 17:28:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.890 17:28:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.890 17:28:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:19.890 17:28:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.149 Malloc0 00:07:20.149 17:28:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.408 Malloc1 00:07:20.408 17:28:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.408 17:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:20.666 /dev/nbd0 00:07:20.666 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.666 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.666 1+0 records in 00:07:20.666 1+0 records out 
00:07:20.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265716 s, 15.4 MB/s 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.666 17:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:20.666 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.666 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.666 17:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:21.232 /dev/nbd1 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.232 1+0 records in 00:07:21.232 1+0 records out 00:07:21.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279191 s, 14.7 MB/s 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.232 17:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.232 { 00:07:21.232 "bdev_name": "Malloc0", 00:07:21.232 "nbd_device": "/dev/nbd0" 00:07:21.232 }, 00:07:21.232 { 00:07:21.232 "bdev_name": "Malloc1", 00:07:21.232 "nbd_device": "/dev/nbd1" 00:07:21.232 } 
00:07:21.232 ]' 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.232 { 00:07:21.232 "bdev_name": "Malloc0", 00:07:21.232 "nbd_device": "/dev/nbd0" 00:07:21.232 }, 00:07:21.232 { 00:07:21.232 "bdev_name": "Malloc1", 00:07:21.232 "nbd_device": "/dev/nbd1" 00:07:21.232 } 00:07:21.232 ]' 00:07:21.232 17:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.491 /dev/nbd1' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.491 /dev/nbd1' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:21.491 256+0 records in 00:07:21.491 256+0 records out 00:07:21.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633502 s, 166 MB/s 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.491 256+0 records in 00:07:21.491 256+0 records out 00:07:21.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233662 s, 44.9 MB/s 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.491 256+0 records in 00:07:21.491 256+0 records out 00:07:21.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272293 s, 38.5 MB/s 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.491 17:28:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.491 17:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.749 17:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.007 17:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.008 17:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.008 17:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.008 17:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.008 17:28:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.008 17:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.574 17:28:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.574 17:28:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:22.832 17:28:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:22.832 [2024-11-20 17:28:04.553548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.832 [2024-11-20 17:28:04.597887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.832 [2024-11-20 17:28:04.597898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.091 [2024-11-20 17:28:04.650499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.091 [2024-11-20 17:28:04.650561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:25.622 17:28:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73137 /var/tmp/spdk-nbd.sock 00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73137 ']' 00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
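
The count=2 and count=0 checks scattered through these rounds come from nbd_get_count, which lists attached devices over RPC and counts the /dev/nbd paths. Note the guard visible in the trace: grep -c exits non-zero when the count is zero, which would otherwise trip errexit, hence the trailing true. A sketch using the socket path from this run:

    count_nbd_devices() {
        local rpc_addr=$1
        ./scripts/rpc.py -s "$rpc_addr" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true    # prints 0 but exits 1 on no match
    }

    count_nbd_devices /var/tmp/spdk-nbd.sock    # 2 while both disks are attached, 0 after teardown
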
00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.622 17:28:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:25.880 17:28:07 event.app_repeat -- event/event.sh@39 -- # killprocess 73137 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 73137 ']' 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 73137 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.880 17:28:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73137 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.139 killing process with pid 73137 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73137' 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 73137 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 73137 00:07:26.139 spdk_app_start is called in Round 0. 00:07:26.139 Shutdown signal received, stop current app iteration 00:07:26.139 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:26.139 spdk_app_start is called in Round 1. 00:07:26.139 Shutdown signal received, stop current app iteration 00:07:26.139 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:26.139 spdk_app_start is called in Round 2. 00:07:26.139 Shutdown signal received, stop current app iteration 00:07:26.139 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:26.139 spdk_app_start is called in Round 3. 00:07:26.139 Shutdown signal received, stop current app iteration 00:07:26.139 17:28:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:26.139 17:28:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:26.139 00:07:26.139 real 0m18.575s 00:07:26.139 user 0m42.054s 00:07:26.139 sys 0m2.832s 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.139 17:28:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.139 ************************************ 00:07:26.139 END TEST app_repeat 00:07:26.139 ************************************ 00:07:26.139 17:28:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:26.139 17:28:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:26.139 17:28:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.139 17:28:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.139 17:28:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.139 ************************************ 00:07:26.139 START TEST cpu_locks 00:07:26.139 ************************************ 00:07:26.139 17:28:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:26.398 * Looking for test storage... 
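
Teardown above runs through killprocess, which guards the SIGTERM with three checks: the pid is non-empty, the process is still alive, and its command name is not sudo. A simplified Linux-only sketch of the checks visible in the trace; what the real helper does when the name is sudo is not shown in this log, so this sketch simply refuses:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # already gone?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1      # never SIGTERM the sudo parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it; valid because the test shell spawned it
    }
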
00:07:26.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:26.398 17:28:07 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.398 17:28:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.398 17:28:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.398 17:28:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.398 17:28:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:26.398 17:28:08 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.398 17:28:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.398 --rc genhtml_branch_coverage=1 00:07:26.398 --rc genhtml_function_coverage=1 00:07:26.398 --rc genhtml_legend=1 00:07:26.398 --rc geninfo_all_blocks=1 00:07:26.398 --rc geninfo_unexecuted_blocks=1 00:07:26.398 00:07:26.398 ' 00:07:26.398 17:28:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.398 --rc genhtml_branch_coverage=1 00:07:26.398 --rc genhtml_function_coverage=1 
00:07:26.398 --rc genhtml_legend=1 00:07:26.399 --rc geninfo_all_blocks=1 00:07:26.399 --rc geninfo_unexecuted_blocks=1 00:07:26.399 00:07:26.399 ' 00:07:26.399 17:28:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.399 --rc genhtml_branch_coverage=1 00:07:26.399 --rc genhtml_function_coverage=1 00:07:26.399 --rc genhtml_legend=1 00:07:26.399 --rc geninfo_all_blocks=1 00:07:26.399 --rc geninfo_unexecuted_blocks=1 00:07:26.399 00:07:26.399 ' 00:07:26.399 17:28:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.399 --rc genhtml_branch_coverage=1 00:07:26.399 --rc genhtml_function_coverage=1 00:07:26.399 --rc genhtml_legend=1 00:07:26.399 --rc geninfo_all_blocks=1 00:07:26.399 --rc geninfo_unexecuted_blocks=1 00:07:26.399 00:07:26.399 ' 00:07:26.399 17:28:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:26.399 17:28:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:26.399 17:28:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:26.399 17:28:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:26.399 17:28:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.399 17:28:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.399 17:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.399 ************************************ 00:07:26.399 START TEST default_locks 00:07:26.399 ************************************ 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73759 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73759 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 73759 ']' 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.399 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.399 [2024-11-20 17:28:08.143782] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
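The lcov probe traced above drives scripts/common.sh's component-wise version check: 'lt 1.15 2' splits both versions on '.', '-' and ':' and compares them field by field. A minimal self-contained sketch of that logic in bash, simplified from the traced helpers (no non-numeric handling), so treat it as illustrative rather than the exact scripts/common.sh code:

    # Return 0 (true) iff version $1 is strictly older than version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # newer -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # older -> less-than
        done
        return 1  # equal -> not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x"  # matches the branch taken above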
00:07:26.399 [2024-11-20 17:28:08.143879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73759 ] 00:07:26.657 [2024-11-20 17:28:08.277572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.657 [2024-11-20 17:28:08.338873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.916 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.916 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:26.916 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73759 00:07:26.916 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73759 00:07:26.916 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73759 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 73759 ']' 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 73759 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73759 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.176 killing process with pid 73759 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73759' 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 73759 00:07:27.176 17:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 73759 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73759 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 73759 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 73759 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 73759 ']' 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.743 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (73759) - No such process 00:07:27.743 ERROR: process (pid: 73759) is no longer running 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:27.743 17:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:27.743 00:07:27.744 real 0m1.198s 00:07:27.744 user 0m1.156s 00:07:27.744 sys 0m0.459s 00:07:27.744 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.744 17:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.744 ************************************ 00:07:27.744 END TEST default_locks 00:07:27.744 ************************************ 00:07:27.744 17:28:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:27.744 17:28:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.744 17:28:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.744 17:28:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.744 ************************************ 00:07:27.744 START TEST default_locks_via_rpc 00:07:27.744 ************************************ 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73816 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73816 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 73816 ']' 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
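The default_locks pass that just finished reduces to one observable: a target started with -m 0x1 must hold a POSIX file lock whose name contains spdk_cpu_lock, and lslocks must show it against the target's pid. A reconstruction of the locks_exist helper traced at cpu_locks.sh@22, assuming util-linux lslocks is available:

    # Succeeds iff the given pid holds an SPDK per-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # e.g. once waitforlisten confirms the target is up:
    locks_exist "$spdk_tgt_pid" && echo "core lock is held"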
00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.744 17:28:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.744 [2024-11-20 17:28:09.394877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.744 [2024-11-20 17:28:09.394994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73816 ] 00:07:28.003 [2024-11-20 17:28:09.532694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.003 [2024-11-20 17:28:09.593303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73816 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73816 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73816 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 73816 ']' 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 73816 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.939 17:28:10 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73816 00:07:29.198 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.198 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.198 killing process with pid 73816 00:07:29.198 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73816' 00:07:29.198 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 73816 00:07:29.198 17:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 73816 00:07:29.457 00:07:29.457 real 0m1.784s 00:07:29.457 user 0m1.946s 00:07:29.457 sys 0m0.517s 00:07:29.457 17:28:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.457 17:28:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.457 ************************************ 00:07:29.457 END TEST default_locks_via_rpc 00:07:29.457 ************************************ 00:07:29.457 17:28:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:29.457 17:28:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.457 17:28:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.457 17:28:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.457 ************************************ 00:07:29.457 START TEST non_locking_app_on_locked_coremask 00:07:29.457 ************************************ 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73875 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73875 /var/tmp/spdk.sock 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 73875 ']' 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.457 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.457 [2024-11-20 17:28:11.221588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
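Teardown in each of these tests goes through the killprocess helper traced above: confirm the pid still responds to kill -0, read its comm name (reactor_0 for an SPDK target), then kill and reap it. A condensed sketch, minus the uname and sudo special-casing visible in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap if it is our child
    }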
00:07:29.457 [2024-11-20 17:28:11.221664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73875 ] 00:07:29.716 [2024-11-20 17:28:11.351850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.716 [2024-11-20 17:28:11.418947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=73895 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 73895 /var/tmp/spdk2.sock 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 73895 ']' 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.975 17:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.975 [2024-11-20 17:28:11.723425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:29.975 [2024-11-20 17:28:11.723501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:07:30.234 [2024-11-20 17:28:11.859091] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
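non_locking_app_on_locked_coremask runs two targets on the same core: the first claims the core 0 lock, and the second (pid 73895 above) survives only because it passes --disable-cpumask-locks and uses its own RPC socket. The shape of that setup, paraphrased from the traced commands ($SPDK_BIN_DIR is shorthand for the build/bin path seen in the log):

    # First target claims the core 0 lock on the default socket.
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!

    # Second target shares core 0 but opts out of lock enforcement,
    # on a separate socket so the two RPC servers don't collide.
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    # (the harness then waitforlistens each pid before asserting.)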
00:07:30.234 [2024-11-20 17:28:11.859132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.234 [2024-11-20 17:28:11.995398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.169 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.169 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:31.169 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73875 00:07:31.169 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.169 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 73875 ']' 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.104 killing process with pid 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73875' 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 73875 00:07:32.104 17:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 73875 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 73895 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 73895 ']' 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 73895 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73895 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.671 killing process with pid 73895 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73895' 00:07:32.671 17:28:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 73895 00:07:32.671 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 73895 00:07:33.238 00:07:33.238 real 0m3.641s 00:07:33.238 user 0m3.929s 00:07:33.238 sys 0m1.182s 00:07:33.238 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.238 17:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.238 ************************************ 00:07:33.238 END TEST non_locking_app_on_locked_coremask 00:07:33.238 ************************************ 00:07:33.238 17:28:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:33.238 17:28:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.238 17:28:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.238 17:28:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.238 ************************************ 00:07:33.238 START TEST locking_app_on_unlocked_coremask 00:07:33.238 ************************************ 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=73974 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 73974 /var/tmp/spdk.sock 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 73974 ']' 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.238 17:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.238 [2024-11-20 17:28:14.929422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.238 [2024-11-20 17:28:14.929522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73974 ] 00:07:33.497 [2024-11-20 17:28:15.064789] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.497 [2024-11-20 17:28:15.064825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.497 [2024-11-20 17:28:15.120920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.755 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.755 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.755 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=73989 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 73989 /var/tmp/spdk2.sock 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 73989 ']' 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.756 17:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.756 [2024-11-20 17:28:15.447763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
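locking_app_on_unlocked_coremask inverts the previous case: here the first target opts out with --disable-cpumask-locks, so core 0 stays unclaimed, and the second, lock-enforcing target is the one that must own the lock, which is why the trace below runs locks_exist against pid 73989 rather than 73974. In sketch form (same hypothetical $SPDK_BIN_DIR shorthand):

    "$SPDK_BIN_DIR"/spdk_tgt -m 0x1 --disable-cpumask-locks &   # takes no lock
    pid1=$!
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0
    pid2=$!
    locks_exist "$pid2"   # the lock must belong to the second instance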
00:07:33.756 [2024-11-20 17:28:15.447860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73989 ] 00:07:34.015 [2024-11-20 17:28:15.588922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.015 [2024-11-20 17:28:15.715878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.949 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.949 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:34.949 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 73989 00:07:34.949 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73989 00:07:34.949 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 73974 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 73974 ']' 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 73974 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.207 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73974 00:07:35.466 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.466 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.466 killing process with pid 73974 00:07:35.466 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73974' 00:07:35.466 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 73974 00:07:35.466 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 73974 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 73989 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 73989 ']' 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 73989 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73989 00:07:36.032 killing process with pid 73989 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.032 17:28:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73989' 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 73989 00:07:36.032 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 73989 00:07:36.600 ************************************ 00:07:36.600 END TEST locking_app_on_unlocked_coremask 00:07:36.600 ************************************ 00:07:36.600 00:07:36.600 real 0m3.271s 00:07:36.600 user 0m3.541s 00:07:36.600 sys 0m1.013s 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.600 17:28:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:36.600 17:28:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.600 17:28:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.600 17:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.600 ************************************ 00:07:36.600 START TEST locking_app_on_locked_coremask 00:07:36.600 ************************************ 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74064 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74064 /var/tmp/spdk.sock 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74064 ']' 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.600 17:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.600 [2024-11-20 17:28:18.264094] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:36.600 [2024-11-20 17:28:18.264387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:07:36.858 [2024-11-20 17:28:18.403251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.858 [2024-11-20 17:28:18.469160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74092 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74092 /var/tmp/spdk2.sock 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74092 /var/tmp/spdk2.sock 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74092 /var/tmp/spdk2.sock 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74092 ']' 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.793 17:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.793 [2024-11-20 17:28:19.315104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
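locking_app_on_locked_coremask expects the second lock-enforcing target to die during startup; the 'Cannot create lock on core 0, probably process 74064 has claimed it' error below is the intended outcome, and the NOT wrapper converts that failure into a pass. The real autotest_common.sh NOT also validates its argument (the type -t checks traced above), so this is a deliberately minimal sketch:

    # Succeed exactly when the wrapped command fails.
    NOT() {
        ! "$@"
    }

    # Startup on an already-claimed core must fail:
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock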
00:07:37.793 [2024-11-20 17:28:19.315380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74092 ] 00:07:37.793 [2024-11-20 17:28:19.457225] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74064 has claimed it. 00:07:37.793 [2024-11-20 17:28:19.457285] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:38.360 ERROR: process (pid: 74092) is no longer running 00:07:38.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74092) - No such process 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74064 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74064 00:07:38.360 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74064 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74064 ']' 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74064 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74064 00:07:38.929 killing process with pid 74064 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74064' 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74064 00:07:38.929 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74064 00:07:39.187 ************************************ 00:07:39.187 END TEST locking_app_on_locked_coremask 00:07:39.187 ************************************ 00:07:39.187 00:07:39.187 real 0m2.677s 00:07:39.187 user 0m3.105s 00:07:39.187 sys 0m0.709s 00:07:39.187 17:28:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.187 17:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 17:28:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:39.188 17:28:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.188 17:28:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.188 17:28:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 ************************************ 00:07:39.188 START TEST locking_overlapped_coremask 00:07:39.188 ************************************ 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74149 00:07:39.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74149 /var/tmp/spdk.sock 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74149 ']' 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.188 17:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 [2024-11-20 17:28:21.003172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:39.446 [2024-11-20 17:28:21.003300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74149 ] 00:07:39.446 [2024-11-20 17:28:21.143620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.446 [2024-11-20 17:28:21.197922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.446 [2024-11-20 17:28:21.198092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.446 [2024-11-20 17:28:21.198061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74179 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74179 /var/tmp/spdk2.sock 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74179 /var/tmp/spdk2.sock 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74179 /var/tmp/spdk2.sock 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74179 ']' 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.382 17:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.382 [2024-11-20 17:28:22.024958] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
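locking_overlapped_coremask pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4); the contested core is just the bitwise intersection of the two masks, and the claim failure that follows accordingly names core 2:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2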
00:07:40.382 [2024-11-20 17:28:22.025272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74179 ] 00:07:40.641 [2024-11-20 17:28:22.169131] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74149 has claimed it. 00:07:40.641 [2024-11-20 17:28:22.169208] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:41.207 ERROR: process (pid: 74179) is no longer running 00:07:41.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74179) - No such process 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74149 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 74149 ']' 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 74149 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74149 00:07:41.207 killing process with pid 74149 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74149' 00:07:41.207 17:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 74149 00:07:41.207 17:28:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 74149 00:07:41.465 ************************************ 00:07:41.465 END TEST locking_overlapped_coremask 00:07:41.465 ************************************ 00:07:41.465 00:07:41.465 real 0m2.226s 00:07:41.465 user 0m6.277s 00:07:41.465 sys 0m0.499s 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.465 17:28:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:41.465 17:28:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.465 17:28:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.465 17:28:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.465 ************************************ 00:07:41.465 START TEST locking_overlapped_coremask_via_rpc 00:07:41.465 ************************************ 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:41.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74225 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74225 /var/tmp/spdk.sock 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74225 ']' 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.465 17:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.723 [2024-11-20 17:28:23.281951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:41.723 [2024-11-20 17:28:23.282288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74225 ] 00:07:41.723 [2024-11-20 17:28:23.423239] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
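Before tearing down, the overlapped test's check_remaining_locks (expanded in the trace above) asserts that the surviving holder of mask 0x7 owns exactly the lock files for cores 0-2: each claimed core corresponds to one /var/tmp/spdk_cpu_lock_NNN file. A condensed reconstruction; the traced version does the comparison with an escaped glob pattern rather than the quoted string used here:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }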
00:07:41.723 [2024-11-20 17:28:23.423515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.723 [2024-11-20 17:28:23.478250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.723 [2024-11-20 17:28:23.478387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.723 [2024-11-20 17:28:23.478397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74255 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74255 /var/tmp/spdk2.sock 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74255 ']' 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.658 17:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.658 [2024-11-20 17:28:24.344362] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:42.658 [2024-11-20 17:28:24.344708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74255 ] 00:07:42.916 [2024-11-20 17:28:24.486574] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:42.916 [2024-11-20 17:28:24.486625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.916 [2024-11-20 17:28:24.666852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.916 [2024-11-20 17:28:24.666984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.916 [2024-11-20 17:28:24.666985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.851 [2024-11-20 17:28:25.390279] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74225 has claimed it. 00:07:43.851 2024/11/20 17:28:25 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:43.851 request: 00:07:43.851 { 00:07:43.851 "method": "framework_enable_cpumask_locks", 00:07:43.851 "params": {} 00:07:43.851 } 00:07:43.851 Got JSON-RPC error response 00:07:43.851 GoRPCClient: error on JSON-RPC call 00:07:43.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.851 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74225 /var/tmp/spdk.sock 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74225 ']' 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.852 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74255 /var/tmp/spdk2.sock 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74255 ']' 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.110 17:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.370 ************************************ 00:07:44.370 END TEST locking_overlapped_coremask_via_rpc 00:07:44.370 ************************************ 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.370 00:07:44.370 real 0m2.811s 00:07:44.370 user 0m1.485s 00:07:44.370 sys 0m0.250s 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.370 17:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.370 17:28:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:44.370 17:28:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74225 ]] 00:07:44.370 17:28:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74225 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74225 ']' 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74225 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74225 00:07:44.370 killing process with pid 74225 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74225' 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74225 00:07:44.370 17:28:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74225 00:07:44.937 17:28:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74255 ]] 00:07:44.937 17:28:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74255 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74255 ']' 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74255 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.937 
17:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74255 00:07:44.937 killing process with pid 74255 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74255' 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74255 00:07:44.937 17:28:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74255 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74225 ]] 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74225 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74225 ']' 00:07:45.513 Process with pid 74225 is not found 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74225 00:07:45.513 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74225) - No such process 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74225 is not found' 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74255 ]] 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74255 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74255 ']' 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74255 00:07:45.513 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74255) - No such process 00:07:45.513 Process with pid 74255 is not found 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74255 is not found' 00:07:45.513 17:28:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.513 ************************************ 00:07:45.513 END TEST cpu_locks 00:07:45.513 ************************************ 00:07:45.513 00:07:45.513 real 0m19.132s 00:07:45.513 user 0m35.174s 00:07:45.513 sys 0m5.642s 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.513 17:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.513 ************************************ 00:07:45.513 END TEST event 00:07:45.513 ************************************ 00:07:45.513 00:07:45.513 real 0m47.397s 00:07:45.513 user 1m34.646s 00:07:45.513 sys 0m9.404s 00:07:45.513 17:28:27 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.513 17:28:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.513 17:28:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.513 17:28:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.513 17:28:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.513 17:28:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.513 ************************************ 00:07:45.513 START TEST thread 00:07:45.513 ************************************ 00:07:45.513 17:28:27 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.513 * Looking for test storage... 
00:07:45.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:45.513 17:28:27 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.513 17:28:27 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.513 17:28:27 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.788 17:28:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.788 17:28:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.788 17:28:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.788 17:28:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.788 17:28:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.788 17:28:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.788 17:28:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.788 17:28:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.788 17:28:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.788 17:28:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.788 17:28:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.788 17:28:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:45.788 17:28:27 thread -- scripts/common.sh@345 -- # : 1 00:07:45.788 17:28:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.788 17:28:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.788 17:28:27 thread -- scripts/common.sh@365 -- # decimal 1 00:07:45.788 17:28:27 thread -- scripts/common.sh@353 -- # local d=1 00:07:45.788 17:28:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.788 17:28:27 thread -- scripts/common.sh@355 -- # echo 1 00:07:45.788 17:28:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.788 17:28:27 thread -- scripts/common.sh@366 -- # decimal 2 00:07:45.788 17:28:27 thread -- scripts/common.sh@353 -- # local d=2 00:07:45.788 17:28:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.788 17:28:27 thread -- scripts/common.sh@355 -- # echo 2 00:07:45.788 17:28:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.788 17:28:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.788 17:28:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.788 17:28:27 thread -- scripts/common.sh@368 -- # return 0 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.788 --rc genhtml_branch_coverage=1 00:07:45.788 --rc genhtml_function_coverage=1 00:07:45.788 --rc genhtml_legend=1 00:07:45.788 --rc geninfo_all_blocks=1 00:07:45.788 --rc geninfo_unexecuted_blocks=1 00:07:45.788 00:07:45.788 ' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.788 --rc genhtml_branch_coverage=1 00:07:45.788 --rc genhtml_function_coverage=1 00:07:45.788 --rc genhtml_legend=1 00:07:45.788 --rc geninfo_all_blocks=1 00:07:45.788 --rc geninfo_unexecuted_blocks=1 00:07:45.788 00:07:45.788 ' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:45.788 --rc genhtml_branch_coverage=1 00:07:45.788 --rc genhtml_function_coverage=1 00:07:45.788 --rc genhtml_legend=1 00:07:45.788 --rc geninfo_all_blocks=1 00:07:45.788 --rc geninfo_unexecuted_blocks=1 00:07:45.788 00:07:45.788 ' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.788 --rc genhtml_branch_coverage=1 00:07:45.788 --rc genhtml_function_coverage=1 00:07:45.788 --rc genhtml_legend=1 00:07:45.788 --rc geninfo_all_blocks=1 00:07:45.788 --rc geninfo_unexecuted_blocks=1 00:07:45.788 00:07:45.788 ' 00:07:45.788 17:28:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.788 17:28:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.788 ************************************ 00:07:45.788 START TEST thread_poller_perf 00:07:45.788 ************************************ 00:07:45.788 17:28:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.788 [2024-11-20 17:28:27.351749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:45.788 [2024-11-20 17:28:27.351952] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74415 ] 00:07:45.788 [2024-11-20 17:28:27.483343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.788 [2024-11-20 17:28:27.542285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.788 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:47.164 [2024-11-20T17:28:28.952Z] ====================================== 00:07:47.164 [2024-11-20T17:28:28.952Z] busy:2206124236 (cyc) 00:07:47.164 [2024-11-20T17:28:28.952Z] total_run_count: 402000 00:07:47.164 [2024-11-20T17:28:28.952Z] tsc_hz: 2200000000 (cyc) 00:07:47.164 [2024-11-20T17:28:28.952Z] ====================================== 00:07:47.164 [2024-11-20T17:28:28.952Z] poller_cost: 5487 (cyc), 2494 (nsec) 00:07:47.164 00:07:47.164 real 0m1.266s 00:07:47.164 user 0m1.105s 00:07:47.164 ************************************ 00:07:47.164 END TEST thread_poller_perf 00:07:47.164 ************************************ 00:07:47.164 sys 0m0.054s 00:07:47.164 17:28:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.164 17:28:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.164 17:28:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.164 17:28:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:47.164 17:28:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.164 17:28:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.164 ************************************ 00:07:47.164 START TEST thread_poller_perf 00:07:47.164 ************************************ 00:07:47.164 17:28:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.164 [2024-11-20 17:28:28.679398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:47.164 [2024-11-20 17:28:28.679510] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74445 ] 00:07:47.164 [2024-11-20 17:28:28.807782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.164 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:47.164 [2024-11-20 17:28:28.864196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.539 [2024-11-20T17:28:30.327Z] ====================================== 00:07:48.539 [2024-11-20T17:28:30.327Z] busy:2201528848 (cyc) 00:07:48.539 [2024-11-20T17:28:30.327Z] total_run_count: 5361000 00:07:48.539 [2024-11-20T17:28:30.327Z] tsc_hz: 2200000000 (cyc) 00:07:48.539 [2024-11-20T17:28:30.327Z] ====================================== 00:07:48.539 [2024-11-20T17:28:30.327Z] poller_cost: 410 (cyc), 186 (nsec) 00:07:48.539 ************************************ 00:07:48.539 END TEST thread_poller_perf 00:07:48.539 ************************************ 00:07:48.539 00:07:48.539 real 0m1.257s 00:07:48.539 user 0m1.099s 00:07:48.539 sys 0m0.053s 00:07:48.539 17:28:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.539 17:28:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.539 17:28:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:48.539 ************************************ 00:07:48.539 END TEST thread 00:07:48.539 ************************************ 00:07:48.539 00:07:48.539 real 0m2.829s 00:07:48.539 user 0m2.356s 00:07:48.539 sys 0m0.258s 00:07:48.539 17:28:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.539 17:28:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.539 17:28:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:48.539 17:28:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:48.539 17:28:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.539 17:28:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.539 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:48.539 ************************************ 00:07:48.539 START TEST app_cmdline 00:07:48.539 ************************************ 00:07:48.539 17:28:30 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:48.539 * Looking for test storage... 
00:07:48.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:48.539 17:28:30 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:48.539 17:28:30 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:48.539 17:28:30 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:48.539 17:28:30 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:48.539 17:28:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.540 17:28:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:48.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.540 17:28:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.540 17:28:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.540 17:28:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.540 17:28:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:48.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.540 --rc genhtml_branch_coverage=1 00:07:48.540 --rc genhtml_function_coverage=1 00:07:48.540 --rc genhtml_legend=1 00:07:48.540 --rc geninfo_all_blocks=1 00:07:48.540 --rc geninfo_unexecuted_blocks=1 00:07:48.540 00:07:48.540 ' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:48.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.540 --rc genhtml_branch_coverage=1 00:07:48.540 --rc genhtml_function_coverage=1 00:07:48.540 --rc genhtml_legend=1 00:07:48.540 --rc geninfo_all_blocks=1 00:07:48.540 --rc geninfo_unexecuted_blocks=1 00:07:48.540 00:07:48.540 ' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:48.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.540 --rc genhtml_branch_coverage=1 00:07:48.540 --rc genhtml_function_coverage=1 00:07:48.540 --rc genhtml_legend=1 00:07:48.540 --rc geninfo_all_blocks=1 00:07:48.540 --rc geninfo_unexecuted_blocks=1 00:07:48.540 00:07:48.540 ' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:48.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.540 --rc genhtml_branch_coverage=1 00:07:48.540 --rc genhtml_function_coverage=1 00:07:48.540 --rc genhtml_legend=1 00:07:48.540 --rc geninfo_all_blocks=1 00:07:48.540 --rc geninfo_unexecuted_blocks=1 00:07:48.540 00:07:48.540 ' 00:07:48.540 17:28:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:48.540 17:28:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74533 00:07:48.540 17:28:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74533 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 74533 ']' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.540 17:28:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.540 17:28:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:48.540 [2024-11-20 17:28:30.288411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:48.540 [2024-11-20 17:28:30.288515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74533 ] 00:07:48.798 [2024-11-20 17:28:30.427624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.798 [2024-11-20 17:28:30.487383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.057 17:28:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.057 17:28:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:49.057 17:28:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:49.316 { 00:07:49.316 "fields": { 00:07:49.316 "commit": "b18e1bd62", 00:07:49.316 "major": 24, 00:07:49.316 "minor": 9, 00:07:49.316 "patch": 1, 00:07:49.316 "suffix": "-pre" 00:07:49.316 }, 00:07:49.316 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62" 00:07:49.316 } 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:49.316 17:28:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.316 17:28:31 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.317 17:28:31 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.575 2024/11/20 17:28:31 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:49.575 request: 00:07:49.575 { 00:07:49.575 "method": "env_dpdk_get_mem_stats", 00:07:49.575 "params": {} 00:07:49.575 } 00:07:49.575 Got JSON-RPC error response 00:07:49.575 GoRPCClient: error on JSON-RPC call 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.575 17:28:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74533 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 74533 ']' 00:07:49.575 17:28:31 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 74533 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74533 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.834 killing process with pid 74533 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74533' 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@969 -- # kill 74533 00:07:49.834 17:28:31 app_cmdline -- common/autotest_common.sh@974 -- # wait 74533 00:07:50.093 00:07:50.093 real 0m1.736s 00:07:50.093 user 0m2.090s 00:07:50.093 sys 0m0.514s 00:07:50.093 17:28:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.093 ************************************ 00:07:50.093 END TEST app_cmdline 00:07:50.093 ************************************ 00:07:50.093 17:28:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:50.093 17:28:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:50.093 17:28:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.093 17:28:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.093 17:28:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.093 ************************************ 00:07:50.093 START TEST version 00:07:50.093 ************************************ 00:07:50.093 17:28:31 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:50.352 * Looking for test storage... 
00:07:50.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.352 17:28:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.352 17:28:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.352 17:28:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.352 17:28:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.352 17:28:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.352 17:28:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.352 17:28:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.352 17:28:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.352 17:28:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.352 17:28:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.352 17:28:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.352 17:28:31 version -- scripts/common.sh@344 -- # case "$op" in 00:07:50.352 17:28:31 version -- scripts/common.sh@345 -- # : 1 00:07:50.352 17:28:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.352 17:28:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.352 17:28:31 version -- scripts/common.sh@365 -- # decimal 1 00:07:50.352 17:28:31 version -- scripts/common.sh@353 -- # local d=1 00:07:50.352 17:28:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.352 17:28:31 version -- scripts/common.sh@355 -- # echo 1 00:07:50.352 17:28:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.352 17:28:31 version -- scripts/common.sh@366 -- # decimal 2 00:07:50.352 17:28:31 version -- scripts/common.sh@353 -- # local d=2 00:07:50.352 17:28:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.352 17:28:31 version -- scripts/common.sh@355 -- # echo 2 00:07:50.352 17:28:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.352 17:28:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.352 17:28:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.352 17:28:31 version -- scripts/common.sh@368 -- # return 0 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.352 --rc genhtml_branch_coverage=1 00:07:50.352 --rc genhtml_function_coverage=1 00:07:50.352 --rc genhtml_legend=1 00:07:50.352 --rc geninfo_all_blocks=1 00:07:50.352 --rc geninfo_unexecuted_blocks=1 00:07:50.352 00:07:50.352 ' 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.352 --rc genhtml_branch_coverage=1 00:07:50.352 --rc genhtml_function_coverage=1 00:07:50.352 --rc genhtml_legend=1 00:07:50.352 --rc geninfo_all_blocks=1 00:07:50.352 --rc geninfo_unexecuted_blocks=1 00:07:50.352 00:07:50.352 ' 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.352 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:50.352 --rc genhtml_branch_coverage=1 00:07:50.352 --rc genhtml_function_coverage=1 00:07:50.352 --rc genhtml_legend=1 00:07:50.352 --rc geninfo_all_blocks=1 00:07:50.352 --rc geninfo_unexecuted_blocks=1 00:07:50.352 00:07:50.352 ' 00:07:50.352 17:28:31 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.352 --rc genhtml_branch_coverage=1 00:07:50.352 --rc genhtml_function_coverage=1 00:07:50.352 --rc genhtml_legend=1 00:07:50.352 --rc geninfo_all_blocks=1 00:07:50.352 --rc geninfo_unexecuted_blocks=1 00:07:50.352 00:07:50.352 ' 00:07:50.352 17:28:32 version -- app/version.sh@17 -- # get_header_version major 00:07:50.352 17:28:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:50.352 17:28:32 version -- app/version.sh@14 -- # cut -f2 00:07:50.352 17:28:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.352 17:28:32 version -- app/version.sh@17 -- # major=24 00:07:50.352 17:28:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:50.353 17:28:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # cut -f2 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.353 17:28:32 version -- app/version.sh@18 -- # minor=9 00:07:50.353 17:28:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:50.353 17:28:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # cut -f2 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.353 17:28:32 version -- app/version.sh@19 -- # patch=1 00:07:50.353 17:28:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:50.353 17:28:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # cut -f2 00:07:50.353 17:28:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:50.353 17:28:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:50.353 17:28:32 version -- app/version.sh@22 -- # version=24.9 00:07:50.353 17:28:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:50.353 17:28:32 version -- app/version.sh@25 -- # version=24.9.1 00:07:50.353 17:28:32 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:50.353 17:28:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:50.353 17:28:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:50.353 17:28:32 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:50.353 17:28:32 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:50.353 00:07:50.353 real 0m0.264s 00:07:50.353 user 0m0.176s 00:07:50.353 sys 0m0.128s 00:07:50.353 17:28:32 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.353 17:28:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:50.353 ************************************ 00:07:50.353 END TEST version 
00:07:50.353 ************************************ 00:07:50.353 17:28:32 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:50.353 17:28:32 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:50.353 17:28:32 -- spdk/autotest.sh@194 -- # uname -s 00:07:50.353 17:28:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:50.353 17:28:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:50.353 17:28:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:50.353 17:28:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:50.353 17:28:32 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:50.353 17:28:32 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:50.353 17:28:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.353 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:07:50.611 17:28:32 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:50.611 17:28:32 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:50.611 17:28:32 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:50.611 17:28:32 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:50.611 17:28:32 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:50.611 17:28:32 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:50.611 17:28:32 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.611 17:28:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.611 17:28:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.611 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:07:50.611 ************************************ 00:07:50.611 START TEST nvmf_tcp 00:07:50.611 ************************************ 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.611 * Looking for test storage... 00:07:50.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.611 17:28:32 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.611 17:28:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.611 --rc genhtml_branch_coverage=1 00:07:50.611 --rc genhtml_function_coverage=1 00:07:50.611 --rc genhtml_legend=1 00:07:50.612 --rc geninfo_all_blocks=1 00:07:50.612 --rc geninfo_unexecuted_blocks=1 00:07:50.612 00:07:50.612 ' 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.612 --rc genhtml_branch_coverage=1 00:07:50.612 --rc genhtml_function_coverage=1 00:07:50.612 --rc genhtml_legend=1 00:07:50.612 --rc geninfo_all_blocks=1 00:07:50.612 --rc geninfo_unexecuted_blocks=1 00:07:50.612 00:07:50.612 ' 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.612 --rc genhtml_branch_coverage=1 00:07:50.612 --rc genhtml_function_coverage=1 00:07:50.612 --rc genhtml_legend=1 00:07:50.612 --rc geninfo_all_blocks=1 00:07:50.612 --rc geninfo_unexecuted_blocks=1 00:07:50.612 00:07:50.612 ' 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.612 --rc genhtml_branch_coverage=1 00:07:50.612 --rc genhtml_function_coverage=1 00:07:50.612 --rc genhtml_legend=1 00:07:50.612 --rc geninfo_all_blocks=1 00:07:50.612 --rc geninfo_unexecuted_blocks=1 00:07:50.612 00:07:50.612 ' 00:07:50.612 17:28:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:50.612 17:28:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:50.612 17:28:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.612 17:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.612 ************************************ 00:07:50.612 START TEST nvmf_target_core 00:07:50.612 ************************************ 00:07:50.612 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.871 * Looking for test storage... 00:07:50.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.871 --rc genhtml_branch_coverage=1 00:07:50.871 --rc genhtml_function_coverage=1 00:07:50.871 --rc genhtml_legend=1 00:07:50.871 --rc geninfo_all_blocks=1 00:07:50.871 --rc geninfo_unexecuted_blocks=1 00:07:50.871 00:07:50.871 ' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.871 --rc genhtml_branch_coverage=1 00:07:50.871 --rc genhtml_function_coverage=1 00:07:50.871 --rc genhtml_legend=1 00:07:50.871 --rc geninfo_all_blocks=1 00:07:50.871 --rc geninfo_unexecuted_blocks=1 00:07:50.871 00:07:50.871 ' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.871 --rc genhtml_branch_coverage=1 00:07:50.871 --rc genhtml_function_coverage=1 00:07:50.871 --rc genhtml_legend=1 00:07:50.871 --rc geninfo_all_blocks=1 00:07:50.871 --rc geninfo_unexecuted_blocks=1 00:07:50.871 00:07:50.871 ' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.871 --rc genhtml_branch_coverage=1 00:07:50.871 --rc genhtml_function_coverage=1 00:07:50.871 --rc genhtml_legend=1 00:07:50.871 --rc geninfo_all_blocks=1 00:07:50.871 --rc geninfo_unexecuted_blocks=1 00:07:50.871 00:07:50.871 ' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:50.871 17:28:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.872 ************************************ 00:07:50.872 START TEST nvmf_abort 00:07:50.872 ************************************ 00:07:50.872 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:51.132 * Looking for test storage... 
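
The "[: : integer expression expected" complaint from common.sh line 33, repeated each time the file is re-sourced below, is harmless: the flag being tested is empty, so '[' '' -eq 1 ']' hands test a non-integer operand, test prints the error and returns nonzero, and execution simply falls through to the next branch. A sketch of the failure mode and the usual guard; 'flag' is a stand-in name, not the variable common.sh actually checks:

    flag=''                                  # unset test flag, as in the run above
    if [ "$flag" -eq 1 ]; then :; fi         # prints: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then :; fi    # defaulting to 0 keeps the test well-formed
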
00:07:51.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.132 --rc genhtml_branch_coverage=1 00:07:51.132 --rc genhtml_function_coverage=1 00:07:51.132 --rc genhtml_legend=1 00:07:51.132 --rc geninfo_all_blocks=1 00:07:51.132 --rc geninfo_unexecuted_blocks=1 00:07:51.132 00:07:51.132 ' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.132 --rc genhtml_branch_coverage=1 00:07:51.132 --rc genhtml_function_coverage=1 00:07:51.132 --rc genhtml_legend=1 00:07:51.132 --rc geninfo_all_blocks=1 00:07:51.132 --rc geninfo_unexecuted_blocks=1 00:07:51.132 00:07:51.132 ' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.132 --rc genhtml_branch_coverage=1 00:07:51.132 --rc genhtml_function_coverage=1 00:07:51.132 --rc genhtml_legend=1 00:07:51.132 --rc geninfo_all_blocks=1 00:07:51.132 --rc geninfo_unexecuted_blocks=1 00:07:51.132 00:07:51.132 ' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.132 --rc genhtml_branch_coverage=1 00:07:51.132 --rc genhtml_function_coverage=1 00:07:51.132 --rc genhtml_legend=1 00:07:51.132 --rc geninfo_all_blocks=1 00:07:51.132 --rc geninfo_unexecuted_blocks=1 00:07:51.132 00:07:51.132 ' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
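
The cmp_versions trace above splits both version strings on '.', '-' and ':' and walks them field by field; here 1 < 2 decides in the first field, the helper returns 0 (lcov 1.15 sorts before 2), and the old-style '--rc lcov_branch_coverage=1' option spelling is selected. A condensed sketch of the same idea, not the script's exact code:

    version_lt() {                       # succeeds when $1 sorts before $2
        local -a ver1 ver2; local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                         # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"
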
00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.132 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:51.133 
17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:07:51.133 Cannot find device "nvmf_init_br" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:51.133 Cannot find device "nvmf_init_br2" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:51.133 Cannot find device "nvmf_tgt_br" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.133 Cannot find device "nvmf_tgt_br2" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:51.133 Cannot find device "nvmf_init_br" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:51.133 Cannot find device "nvmf_init_br2" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:51.133 Cannot find device "nvmf_tgt_br" 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:07:51.133 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:51.392 Cannot find device "nvmf_tgt_br2" 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:51.392 Cannot find device "nvmf_br" 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:51.392 Cannot find device "nvmf_init_if" 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:51.392 Cannot find device "nvmf_init_if2" 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:51.392 17:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.392 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:51.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:51.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:07:51.651 00:07:51.651 --- 10.0.0.3 ping statistics --- 00:07:51.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.651 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:51.651 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:51.651 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:07:51.651 00:07:51.651 --- 10.0.0.4 ping statistics --- 00:07:51.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.651 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:51.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:51.651 00:07:51.651 --- 10.0.0.1 ping statistics --- 00:07:51.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.651 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:51.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:07:51.651 00:07:51.651 --- 10.0.0.2 ping statistics --- 00:07:51.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.651 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # return 0 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=74961 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 74961 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 74961 ']' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.651 17:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.651 [2024-11-20 17:28:33.390165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
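
Everything from 'ip netns add' through the four pings above is nvmf_veth_init building a self-contained test network: the initiator-side veth ends stay in the root namespace (10.0.0.1/2), their target-side counterparts move into nvmf_tgt_ns_spdk (10.0.0.3/4), all peer ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules open port 4420, so the nvmf_tgt process launched inside the namespace is reached over a genuine network hop rather than loopback. The "Cannot find device" lines before that are the idempotent cleanup pass, each followed by 'true' so a missing link never aborts the script. A condensed one-pair version of the same topology (the real run wires a second initiator/target pair and the firewall rules as well):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # cross-namespace reachability check
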
00:07:51.651 [2024-11-20 17:28:33.390251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.910 [2024-11-20 17:28:33.535698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.910 [2024-11-20 17:28:33.625098] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.910 [2024-11-20 17:28:33.625184] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.910 [2024-11-20 17:28:33.625200] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.910 [2024-11-20 17:28:33.625212] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.910 [2024-11-20 17:28:33.625223] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.910 [2024-11-20 17:28:33.626515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.910 [2024-11-20 17:28:33.626675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.910 [2024-11-20 17:28:33.626686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 [2024-11-20 17:28:34.477285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 Malloc0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 
Delay0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 [2024-11-20 17:28:34.566387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 17:28:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:53.104 [2024-11-20 17:28:34.752826] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:55.007 Initializing NVMe Controllers 00:07:55.007 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:55.007 controller IO queue size 128 less than required 00:07:55.007 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:55.007 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:55.007 Initialization complete. Launching workers. 
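
Before the abort example launches, the target is assembled over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 4 KiB blocks wrapped in a delay bdev so that I/O stays in flight long enough to be cancelled, and subsystem cnode0 exposing Delay0 on 10.0.0.3:4420. A sketch of the same sequence as direct rpc.py calls, assuming the target from the log is listening on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latency on every I/O path
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The delay bdev is the point of the exercise: with every command held back, the abort example (invoked above with queue depth 128) can flood the controller and cancel outstanding I/O. In the results that follow, the 33228 "failed" I/Os are, by the example's accounting, the commands its abort requests cancelled; 33232 of the 33293 aborts submitted succeeded, which is the pass condition.
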
00:07:55.007 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33228 00:07:55.007 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33293, failed to submit 62 00:07:55.007 success 33232, unsuccessful 61, failed 0 00:07:55.007 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.007 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.007 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:55.007 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.007 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:55.266 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:55.266 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:55.266 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:55.266 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.267 rmmod nvme_tcp 00:07:55.267 rmmod nvme_fabrics 00:07:55.267 rmmod nvme_keyring 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 74961 ']' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 74961 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 74961 ']' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 74961 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74961 00:07:55.267 killing process with pid 74961 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74961' 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 74961 00:07:55.267 17:28:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 74961 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:55.526 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:07:55.784 00:07:55.784 real 0m4.872s 00:07:55.784 user 0m12.834s 00:07:55.784 sys 0m1.218s 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:55.784 ************************************ 00:07:55.784 END TEST nvmf_abort 00:07:55.784 ************************************ 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.784 ************************************ 00:07:55.784 START TEST nvmf_ns_hotplug_stress 00:07:55.784 ************************************ 00:07:55.784 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.044 * Looking for test storage... 00:07:56.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.044 --rc genhtml_branch_coverage=1 00:07:56.044 --rc genhtml_function_coverage=1 00:07:56.044 --rc genhtml_legend=1 00:07:56.044 --rc geninfo_all_blocks=1 00:07:56.044 --rc geninfo_unexecuted_blocks=1 00:07:56.044 00:07:56.044 ' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.044 --rc genhtml_branch_coverage=1 00:07:56.044 --rc genhtml_function_coverage=1 00:07:56.044 --rc genhtml_legend=1 00:07:56.044 --rc geninfo_all_blocks=1 00:07:56.044 --rc geninfo_unexecuted_blocks=1 00:07:56.044 00:07:56.044 ' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.044 --rc genhtml_branch_coverage=1 00:07:56.044 --rc genhtml_function_coverage=1 00:07:56.044 --rc genhtml_legend=1 00:07:56.044 --rc geninfo_all_blocks=1 00:07:56.044 --rc geninfo_unexecuted_blocks=1 00:07:56.044 00:07:56.044 ' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.044 --rc genhtml_branch_coverage=1 00:07:56.044 --rc genhtml_function_coverage=1 00:07:56.044 --rc genhtml_legend=1 00:07:56.044 --rc geninfo_all_blocks=1 00:07:56.044 --rc geninfo_unexecuted_blocks=1 00:07:56.044 00:07:56.044 ' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.044 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:56.045 17:28:37 
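[Annotation] The `[: : integer expression expected` complaint from nvmf/common.sh line 33 above is bash reporting that an empty string was fed to a numeric test: `'[' '' -eq 1 ']'` is not a valid integer comparison, so the guard evaluates as false and the run simply continues. A minimal sketch of the failure mode and a defensive rewrite (the variable name `flag` is a stand-in; the trace does not show which variable expanded empty):

  # Reproduces the message seen above: an unset/empty variable in a numeric test
  flag=''
  [ "$flag" -eq 1 ] && echo never-reached   # bash: [: : integer expression expected

  # Defensive form: default the expansion to 0 so the test is always numeric
  if [ "${flag:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi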
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:56.045 Cannot find device "nvmf_init_br" 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:56.045 Cannot find device "nvmf_init_br2" 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:56.045 Cannot find device "nvmf_tgt_br" 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.045 Cannot find device "nvmf_tgt_br2" 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:07:56.045 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:56.304 Cannot find device "nvmf_init_br" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:56.304 Cannot find device "nvmf_init_br2" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:56.304 Cannot find device "nvmf_tgt_br" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:56.304 Cannot find device "nvmf_tgt_br2" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:56.304 Cannot find device "nvmf_br" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:56.304 Cannot find device "nvmf_init_if" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:56.304 Cannot find device "nvmf_init_if2" 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:56.304 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
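[Annotation] Taken together, the nvmf_veth_init steps above build a small virtual test network: the "Cannot find device" and "Cannot open network namespace" errors are just the idempotent cleanup of a previous run, after which the script creates veth pairs whose `*_if` ends carry traffic and whose `*_br` ends are about to be enslaved to the nvmf_br bridge, moves the target-side interfaces into the nvmf_tgt_ns_spdk namespace, and numbers everything out of 10.0.0.0/24. Condensed to a single initiator/target pair (names and addresses as in the trace), the topology boils down to:

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # one veth pair per side; the *_br ends get attached to bridge nvmf_br next
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"           # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if      # initiator address (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up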
link set nvmf_tgt_br up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.304 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:56.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:07:56.563 00:07:56.563 --- 10.0.0.3 ping statistics --- 00:07:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.563 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:56.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:56.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:07:56.563 00:07:56.563 --- 10.0.0.4 ping statistics --- 00:07:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.563 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:56.563 00:07:56.563 --- 10.0.0.1 ping statistics --- 00:07:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.563 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:56.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:56.563 00:07:56.563 --- 10.0.0.2 ping statistics --- 00:07:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.563 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # return 0 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=75291 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 75291 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 75291 ']' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.563 17:28:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.563 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.563 [2024-11-20 17:28:38.229947] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.563 [2024-11-20 17:28:38.230030] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.822 [2024-11-20 17:28:38.371979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.822 [2024-11-20 17:28:38.451790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.822 [2024-11-20 17:28:38.451862] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.822 [2024-11-20 17:28:38.451872] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.822 [2024-11-20 17:28:38.451880] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.822 [2024-11-20 17:28:38.451887] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
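[Annotation] At this point nvmf_tgt has been launched through the `ip netns exec nvmf_tgt_ns_spdk` wrapper (common.sh@504) and waitforlisten polls pid 75291 until the RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that polling idiom, using the real `rpc_get_methods` RPC as the readiness probe; the actual helper in autotest_common.sh is more elaborate, but the shape is:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died during startup
          # rpc_get_methods only succeeds once the RPC socket is accepting connections
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }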
00:07:56.822 [2024-11-20 17:28:38.452059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.822 [2024-11-20 17:28:38.452209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.822 [2024-11-20 17:28:38.452218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.822 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.822 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:56.822 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:56.822 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.822 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.081 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.081 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:57.081 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:57.339 [2024-11-20 17:28:38.938569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.339 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.599 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:57.858 [2024-11-20 17:28:39.483267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:57.858 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:58.116 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:58.375 Malloc0 00:07:58.375 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:58.375 Delay0 00:07:58.633 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.633 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:58.894 NULL1 00:07:58.894 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:59.153 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
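[Annotation] In RPC terms, the target bring-up just traced is short: one TCP transport, one subsystem allowing up to 10 namespaces (-m 10), a data listener plus a discovery listener on 10.0.0.3:4420, and two namespaces: Delay0, a malloc bdev wrapped with average and p99 read/write latencies all set to 1,000,000 µs (1 s), and NULL1, a 1000 MiB null bdev the stress loop will keep resizing. The same sequence as a standalone script (the `$rpc` and `$nqn` shorthands are introduced here for brevity):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn"    -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0

  $rpc bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns "$nqn" NULL1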
PERF_PID=75408 00:07:59.153 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:07:59.153 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.153 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:00.530 Read completed with error (sct=0, sc=11) 00:08:00.530 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.789 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:00.789 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:00.789 true 00:08:01.048 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:01.048 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.616 17:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.875 17:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:01.875 17:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:02.133 true 00:08:02.134 17:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:02.134 17:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.392 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.651 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:02.651 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
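[Annotation] From here the trace is one pass after another of the hotplug stress loop: spdk_nvme_perf (pid 75408, 30 s of 512 B randread at queue depth 128 from the initiator side) hammers the subsystem while the loop keeps detaching and re-attaching namespace 1 and growing NULL1 by 1 MiB per pass; the expected casualties show up as the "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" lines. Reconstructed from the traced line numbers (@44-@50), the loop is approximately:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # PERF_PID holds the backgrounded spdk_nvme_perf pid (75408 in this run)

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do     # run until perf exits
      $rpc nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back
      ((++null_size))
      $rpc bdev_null_resize NULL1 "$null_size"  # resize event on the other namespace
  done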
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:02.910 true 00:08:02.910 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:02.910 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.846 17:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.846 17:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:03.846 17:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:04.104 true 00:08:04.104 17:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:04.104 17:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.363 17:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.622 17:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:04.622 17:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:04.880 true 00:08:04.880 17:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:04.880 17:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.815 17:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.073 17:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:06.073 17:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:06.073 true 00:08:06.073 17:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:06.073 17:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.332 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.590 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:06.590 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:06.848 
true 00:08:06.848 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:06.848 17:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.783 17:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.041 17:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:08.041 17:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:08.299 true 00:08:08.299 17:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:08.299 17:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.557 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.557 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:08.557 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:08.816 true 00:08:08.816 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:08.816 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.752 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.010 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:10.010 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:10.269 true 00:08:10.269 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:10.269 17:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.528 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.787 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:10.787 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:11.045 true 00:08:11.045 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:11.045 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.980 17:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.980 17:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:11.980 17:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:12.238 true 00:08:12.238 17:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:12.238 17:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.497 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.755 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:12.755 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:13.021 true 00:08:13.021 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:13.021 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.321 17:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.597 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:13.597 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:13.597 true 00:08:13.597 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:13.597 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 17:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.979 17:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:14.979 17:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:15.238 true 00:08:15.238 17:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:15.238 17:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.175 17:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.433 17:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:16.433 17:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:16.692 true 00:08:16.692 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:16.692 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.951 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.951 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:16.951 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:17.210 true 00:08:17.210 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:17.210 17:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.144 17:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.403 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:18.403 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:18.662 true 00:08:18.662 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:18.662 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.921 17:29:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.180 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:19.180 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:19.439 true 00:08:19.439 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:19.439 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.698 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.698 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:19.698 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:20.265 true 00:08:20.265 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:20.265 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.201 17:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.460 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:21.460 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:21.718 true 00:08:21.718 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:21.718 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.977 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.235 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:22.235 17:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:22.495 true 00:08:22.495 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:22.495 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.495 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.062 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:23.062 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:23.062 true 00:08:23.062 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:23.062 17:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.439 17:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.439 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:24.439 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:24.439 true 00:08:24.698 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:24.698 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.698 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.957 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:24.957 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:25.215 true 00:08:25.215 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:25.215 17:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.150 17:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.409 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:26.409 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:26.668 true 00:08:26.668 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:26.668 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.927 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.185 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:27.185 17:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:27.444 true 00:08:27.444 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:27.444 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.702 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.961 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:27.961 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:28.220 true 00:08:28.220 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408 00:08:28.220 17:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.155 17:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.414 Initializing NVMe Controllers 00:08:29.414 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.414 Controller IO queue size 128, less than required. 00:08:29.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.414 Controller IO queue size 128, less than required. 00:08:29.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.414 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:29.414 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:29.414 Initialization complete. Launching workers. 
00:08:29.414 ========================================================
00:08:29.415                                                                              Latency(us)
00:08:29.415 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:29.415 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     662.31       0.32   99161.66    2993.34 1044588.13
00:08:29.415 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12878.36       6.29    9939.11    2860.62  562149.44
00:08:29.415 ========================================================
00:08:29.415 Total                                                                  :   13540.68       6.61   14303.23    2860.62 1044588.13
00:08:29.415
00:08:29.415 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:29.415 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:29.673 true
00:08:29.673 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75408
00:08:29.673 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (75408) - No such process
00:08:29.673 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 75408
00:08:29.673 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:29.931 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:30.189 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:30.189 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:30.189 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:30.189 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.189 17:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:30.448 null0
00:08:30.448 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:30.448 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.448 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:30.707 null1
00:08:30.707 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:30.707 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.707 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:30.966 null2
00:08:30.966 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:30.966 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.966 17:29:12
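[Annotation] Stepping back to the top of this block: the "kill: (75408) - No such process" line is the stress loop's normal exit, not a failure. Signal 0 delivers nothing and only asks whether the pid still exists, so once the 30-second perf run finishes, `kill -0` fails, the loop ends, and `wait` reaps the exit status before the per-namespace cleanup at @54/@55. The pattern in isolation (both command names are placeholders):

  some_long_running_cmd & pid=$!
  while kill -0 "$pid" 2>/dev/null; do   # true for as long as the process exists
      one_stress_iteration               # placeholder for the per-pass work
  done
  wait "$pid"                            # reap the child and propagate its status
  echo "workload exited with $?"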
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:31.224 null3 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:31.224 null4 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.224 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:31.483 null5 00:08:31.483 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.483 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.483 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:31.742 null6 00:08:31.742 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.742 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.742 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:32.001 null7 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.001 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
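[Editor's note] Because all eight workers share one xtrace descriptor, their records interleave from here on; the per-record @16-@18 markers identify which loop a given line belongs to. Once every worker has started, the driver blocks on all of them (the "wait 76439 76441 ..." record below, @66) before tearing down. Sketched from the traced lines @66-@70, under the same naming assumptions as above:

    # Completion path, per ns_hotplug_stress.sh@66-@70 as traced below:
    wait "${pids[@]}"            # reap all eight add_remove workers (@66)
    trap - SIGINT SIGTERM EXIT   # drop the failure handler (@68)
    nvmftestfini                 # stop the nvmf target, unload modules (@70)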
00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 76439 76441 76442 76443 76445 76447 76449 76453 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.002 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.260 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.261 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.261 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.261 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.261 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.261 17:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.261 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.519 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.778 
17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.778 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.036 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.294 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.294 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.294 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.294 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.294 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.295 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.295 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.295 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.553 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.812 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.070 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.329 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.329 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.329 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.329 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.588 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.848 
17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.848 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.107 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.366 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.625 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 
17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.884 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.143 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.144 17:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.403 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.662 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.921 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.922 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.181 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.440 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.440 17:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.440 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.698 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.699 rmmod nvme_tcp 00:08:37.699 rmmod nvme_fabrics 00:08:37.699 rmmod nvme_keyring 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 75291 ']' 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 75291 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 75291 ']' 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 75291 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75291 00:08:37.699 killing process with pid 75291 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75291' 
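[Editor's note] The shutdown path then kills the long-running target process (pid 75291); the kill and wait records follow below. Based on the autotest_common.sh@950-@974 records, killprocess roughly does the following. This is a simplified sketch; the real helper has more branches (for sudo-wrapped and non-Linux processes) than the trace exposes here:

    # killprocess sketch, reconstructed from the @950-@974 records.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                # a PID must be given (@950)
        kill -0 "$pid" || return 1               # process still alive? (@954)
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 (@956)
        [[ $name != sudo ]] || return 1          # never signal a sudo wrapper (@960)
        echo "killing process with pid $pid"     # (@968)
        kill "$pid"                              # (@969)
        wait "$pid"                              # reap the child (@974)
    }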
00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 75291 00:08:37.699 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 75291 00:08:37.956 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:37.956 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:37.956 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:37.956 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:37.956 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:37.957 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:08:38.216 00:08:38.216 real 0m42.330s 00:08:38.216 user 3m23.506s 00:08:38.216 sys 0m11.849s 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.216 ************************************ 00:08:38.216 END TEST nvmf_ns_hotplug_stress 00:08:38.216 ************************************ 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.216 ************************************ 00:08:38.216 START TEST nvmf_delete_subsystem 00:08:38.216 ************************************ 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.216 * Looking for test storage... 00:08:38.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.216 17:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:38.476 17:29:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.476 --rc genhtml_branch_coverage=1 00:08:38.476 --rc genhtml_function_coverage=1 00:08:38.476 --rc genhtml_legend=1 00:08:38.476 --rc geninfo_all_blocks=1 00:08:38.476 --rc geninfo_unexecuted_blocks=1 00:08:38.476 00:08:38.476 ' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.476 --rc genhtml_branch_coverage=1 00:08:38.476 --rc genhtml_function_coverage=1 00:08:38.476 --rc genhtml_legend=1 00:08:38.476 --rc geninfo_all_blocks=1 00:08:38.476 --rc geninfo_unexecuted_blocks=1 00:08:38.476 00:08:38.476 ' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.476 --rc genhtml_branch_coverage=1 00:08:38.476 --rc genhtml_function_coverage=1 00:08:38.476 --rc genhtml_legend=1 00:08:38.476 --rc geninfo_all_blocks=1 00:08:38.476 --rc geninfo_unexecuted_blocks=1 00:08:38.476 00:08:38.476 ' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.476 --rc genhtml_branch_coverage=1 00:08:38.476 --rc genhtml_function_coverage=1 00:08:38.476 --rc genhtml_legend=1 00:08:38.476 --rc geninfo_all_blocks=1 
00:08:38.476 --rc geninfo_unexecuted_blocks=1 00:08:38.476 00:08:38.476 ' 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.476 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:38.477 Cannot find device "nvmf_init_br" 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:38.477 Cannot find device "nvmf_init_br2" 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:38.477 Cannot find device "nvmf_tgt_br" 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.477 Cannot find device "nvmf_tgt_br2" 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:38.477 Cannot find device "nvmf_init_br" 00:08:38.477 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:38.478 Cannot find device "nvmf_init_br2" 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:38.478 Cannot find device "nvmf_tgt_br" 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:38.478 Cannot find device "nvmf_tgt_br2" 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:38.478 Cannot find device "nvmf_br" 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:38.478 Cannot find device "nvmf_init_if" 00:08:38.478 17:29:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:38.478 Cannot find device "nvmf_init_if2" 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.478 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.737 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:38.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:08:38.738 00:08:38.738 --- 10.0.0.3 ping statistics --- 00:08:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.738 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:38.738 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:38.738 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:08:38.738 00:08:38.738 --- 10.0.0.4 ping statistics --- 00:08:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.738 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:38.738 00:08:38.738 --- 10.0.0.1 ping statistics --- 00:08:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.738 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:38.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:08:38.738 00:08:38.738 --- 10.0.0.2 ping statistics --- 00:08:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.738 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # return 0 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:38.738 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=77825 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 77825 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 77825 ']' 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
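
Everything from ip netns add through the four pings above is nvmf_veth_init assembling the test topology: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all root-side peers enslaved to the nvmf_br bridge. Reduced to a single initiator/target pair (the trace builds two of each, and tags its iptables rules with an SPDK_NVMF comment that is omitted here), the bootstrap is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # target side lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                     # joins the two pairs
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                  # initiator -> target check
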
00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.998 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.998 [2024-11-20 17:29:20.598550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:38.998 [2024-11-20 17:29:20.598637] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.998 [2024-11-20 17:29:20.742640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.256 [2024-11-20 17:29:20.808031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.257 [2024-11-20 17:29:20.808106] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.257 [2024-11-20 17:29:20.808121] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.257 [2024-11-20 17:29:20.808132] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.257 [2024-11-20 17:29:20.808142] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.257 [2024-11-20 17:29:20.808280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.257 [2024-11-20 17:29:20.808510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.831 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:39.831 [2024-11-20 17:29:21.611575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.112 [2024-11-20 17:29:21.627652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.112 NULL1 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.112 Delay0 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=77876 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:40.112 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.112 [2024-11-20 17:29:21.812404] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
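
The RPC sequence above is the entire delete_subsystem fixture: a TCP transport, a subsystem capped at ten namespaces, and a null bdev wrapped in a delay bdev so that plenty of I/O is still in flight when the subsystem is torn down. Condensed into direct rpc.py calls with the arguments copied from the trace (the -r/-t/-w/-n values are, to my reading, microsecond latencies, i.e. about one second each):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512               # 1000 MiB, 512 B blocks
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # slow every I/O down
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
  # queue up I/O against the slow namespace, then delete it mid-flight
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  "$rpc" nvmf_delete_subsystem "$nqn"

The flood of "completed with error (sct=0, sc=8)" records that follows is the expected outcome: queued commands failing back to the initiator as the namespace, and then the controller, disappear underneath them.
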
00:08:42.046 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.046 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.046 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error 
(sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 starting I/O failed: -6 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 [2024-11-20 17:29:23.854859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x518f50 is same with the state(6) to be set 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Write completed with error 
(sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.305 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 starting I/O failed: -6 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Write completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 Read completed with error (sct=0, sc=8) 00:08:42.306 starting I/O failed: -6 00:08:42.306 Read 
completed with error (sct=0, sc=8) 00:08:42.306 [repeated Read/Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries omitted]
00:08:42.306 [2024-11-20 17:29:23.858373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4c4000c00 is same with the state(6) to be set
00:08:42.306 starting I/O failed: -6 [repeated 4 times]
00:08:43.242 [2024-11-20 17:29:24.825665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x518b90 is same with the state(6) to be set
00:08:43.242 [repeated Read/Write completed with error (sct=0, sc=8) entries omitted]
00:08:43.242 [2024-11-20 17:29:24.855254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4c400d7c0 is same with the state(6) to be set
00:08:43.242 [repeated Read/Write completed with error (sct=0, sc=8) entries omitted]
00:08:43.242 [2024-11-20 17:29:24.855799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4c400cfe0 is same with the state(6) to be set
00:08:43.242 [repeated Read/Write completed with error (sct=0, sc=8) entries omitted]
00:08:43.243 [2024-11-20 17:29:24.856816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x518d70 is same with the state(6) to be set
00:08:43.243 [repeated Read/Write completed with error (sct=0, sc=8) entries omitted]
00:08:43.243 [2024-11-20 17:29:24.857373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fb390 is same with the state(6) to be set
00:08:43.243 Initializing NVMe Controllers
00:08:43.243 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:43.243 Controller IO queue size 128, less than required.
00:08:43.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.243 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.243 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.243 Initialization complete. Launching workers. 00:08:43.243 ======================================================== 00:08:43.243 Latency(us) 00:08:43.243 Device Information : IOPS MiB/s Average min max 00:08:43.243 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.79 0.08 927849.08 515.07 2003779.98 00:08:43.243 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.15 0.09 917254.23 1781.95 1017723.12 00:08:43.243 ======================================================== 00:08:43.243 Total : 347.94 0.17 922333.13 515.07 2003779.98 00:08:43.243 00:08:43.243 [2024-11-20 17:29:24.857786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x518b90 (9): Bad file descriptor 00:08:43.243 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:43.243 17:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.243 17:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:43.243 17:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77876 00:08:43.243 17:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77876 00:08:43.811 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (77876) - No such process 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 77876 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 77876 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 77876 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.811 17:29:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 [2024-11-20 17:29:25.383457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=77922 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:43.811 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.811 [2024-11-20 17:29:25.564298] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
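Stripped of the xtrace noise, the sequence above — rebuild the subsystem over JSON-RPC, relaunch spdk_nvme_perf, then poll it — reduces to roughly the following. This is a sketch, not the literal test script: it assumes SPDK's scripts/rpc.py with its default /var/tmp/spdk.sock socket, while every flag and the loop bound are copied from the log.

    # Recreate the subsystem: any host allowed (-a), serial set, 10-namespace cap (-m 10)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Listen on the in-namespace TCP address the initiator will dial
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Attach the delay bdev as namespace 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 3-second, 70%-read randrw run at qd 128 with 512-byte I/O against the listener
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # kill -0 probes liveness without delivering a signal; this is the
    # delay/sleep 0.5 poll whose iterations appear below
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break    # give up after ~10s of 0.5s naps
        sleep 0.5
    done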
00:08:44.378 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.378 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:44.378 17:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.637 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.637 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:44.637 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.229 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.229 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:45.229 17:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.796 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.796 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:45.796 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.364 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.364 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:46.364 17:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.931 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.931 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:46.931 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.931 Initializing NVMe Controllers 00:08:46.931 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.931 Controller IO queue size 128, less than required. 00:08:46.931 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.931 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:46.931 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:46.931 Initialization complete. Launching workers. 
00:08:46.931 ======================================================== 00:08:46.931 Latency(us) 00:08:46.931 Device Information : IOPS MiB/s Average min max 00:08:46.931 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004538.48 1000147.82 1042602.86 00:08:46.931 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007576.42 1000393.97 1041441.28 00:08:46.931 ======================================================== 00:08:46.931 Total : 256.00 0.12 1006057.45 1000147.82 1042602.86 00:08:46.931 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77922 00:08:47.191 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (77922) - No such process 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 77922 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.191 17:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.191 rmmod nvme_tcp 00:08:47.450 rmmod nvme_fabrics 00:08:47.450 rmmod nvme_keyring 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 77825 ']' 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 77825 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 77825 ']' 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 77825 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77825 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.450 killing process with pid 77825 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77825' 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 77825 00:08:47.450 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 77825 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:47.708 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:08:47.709 00:08:47.709 real 0m9.558s 00:08:47.709 user 0m29.159s 00:08:47.709 sys 0m1.256s 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.709 ************************************ 00:08:47.709 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.709 END TEST nvmf_delete_subsystem 00:08:47.709 ************************************ 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.968 ************************************ 00:08:47.968 START TEST nvmf_host_management 00:08:47.968 ************************************ 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:47.968 * Looking for test storage... 00:08:47.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:47.968 
17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.968 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:47.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.969 --rc genhtml_branch_coverage=1 00:08:47.969 --rc genhtml_function_coverage=1 00:08:47.969 --rc genhtml_legend=1 00:08:47.969 --rc geninfo_all_blocks=1 00:08:47.969 --rc geninfo_unexecuted_blocks=1 00:08:47.969 00:08:47.969 ' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:47.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.969 --rc genhtml_branch_coverage=1 00:08:47.969 --rc genhtml_function_coverage=1 00:08:47.969 --rc genhtml_legend=1 00:08:47.969 --rc geninfo_all_blocks=1 00:08:47.969 --rc geninfo_unexecuted_blocks=1 00:08:47.969 00:08:47.969 ' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:47.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.969 --rc genhtml_branch_coverage=1 00:08:47.969 --rc genhtml_function_coverage=1 00:08:47.969 --rc genhtml_legend=1 00:08:47.969 --rc geninfo_all_blocks=1 00:08:47.969 --rc geninfo_unexecuted_blocks=1 00:08:47.969 00:08:47.969 ' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:47.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.969 --rc genhtml_branch_coverage=1 00:08:47.969 --rc 
genhtml_function_coverage=1 00:08:47.969 --rc genhtml_legend=1 00:08:47.969 --rc geninfo_all_blocks=1 00:08:47.969 --rc geninfo_unexecuted_blocks=1 00:08:47.969 00:08:47.969 ' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:47.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:47.969 17:29:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.969 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:47.970 Cannot find device "nvmf_init_br" 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:47.970 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:48.229 Cannot find device "nvmf_init_br2" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:48.229 Cannot find device "nvmf_tgt_br" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.229 Cannot find device "nvmf_tgt_br2" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.229 Cannot find device "nvmf_init_br" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.229 Cannot find device "nvmf_init_br2" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.229 Cannot find device "nvmf_tgt_br" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.229 Cannot find device "nvmf_tgt_br2" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.229 Cannot find device "nvmf_br" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:48.229 17:29:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.229 Cannot find device "nvmf_init_if" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.229 Cannot find device "nvmf_init_if2" 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.229 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.229 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.229 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.229 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:08:48.488 00:08:48.488 --- 10.0.0.3 ping statistics --- 00:08:48.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.488 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.488 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:08:48.488 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:08:48.488 00:08:48.488 --- 10.0.0.4 ping statistics --- 00:08:48.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.488 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:48.488 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:48.488 00:08:48.488 --- 10.0.0.1 ping statistics --- 00:08:48.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.488 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:08:48.489 00:08:48.489 --- 10.0.0.2 ping statistics --- 00:08:48.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.489 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=78215 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 78215 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # 
'[' -z 78215 ']' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.489 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.489 [2024-11-20 17:29:30.245578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:48.489 [2024-11-20 17:29:30.245664] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.748 [2024-11-20 17:29:30.390490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.748 [2024-11-20 17:29:30.464949] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.748 [2024-11-20 17:29:30.465016] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.748 [2024-11-20 17:29:30.465031] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.748 [2024-11-20 17:29:30.465043] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.748 [2024-11-20 17:29:30.465053] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
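Everything the initiator reaches at 10.0.0.3 lives inside the nvmf_tgt_ns_spdk namespace that nvmftestinit assembled and ping-verified above. Condensed from the ip/iptables commands in the log, the topology and target launch amount to the following sketch (the second initiator/target veth pair and the individual link-up steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end stays in the root namespace so it can be bridged
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # Tie both root-namespace ends together on one bridge
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Punch TCP/4420 through, then start the target inside the namespace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E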
00:08:48.748 [2024-11-20 17:29:30.465198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.748 [2024-11-20 17:29:30.465351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.748 [2024-11-20 17:29:30.465508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.748 [2024-11-20 17:29:30.465514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 [2024-11-20 17:29:30.661209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 Malloc0 00:08:49.007 [2024-11-20 17:29:30.725947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=78268 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 78268 /var/tmp/bdevperf.sock 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 78268 ']' 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:49.007 { 00:08:49.007 "params": { 00:08:49.007 "name": "Nvme$subsystem", 00:08:49.007 "trtype": "$TEST_TRANSPORT", 00:08:49.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.007 "adrfam": "ipv4", 00:08:49.007 "trsvcid": "$NVMF_PORT", 00:08:49.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.007 "hdgst": ${hdgst:-false}, 00:08:49.007 "ddgst": ${ddgst:-false} 00:08:49.007 }, 00:08:49.007 "method": "bdev_nvme_attach_controller" 00:08:49.007 } 00:08:49.007 EOF 00:08:49.007 )") 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:49.007 17:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:49.007 "params": { 00:08:49.007 "name": "Nvme0", 00:08:49.007 "trtype": "tcp", 00:08:49.007 "traddr": "10.0.0.3", 00:08:49.007 "adrfam": "ipv4", 00:08:49.007 "trsvcid": "4420", 00:08:49.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:49.007 "hdgst": false, 00:08:49.007 "ddgst": false 00:08:49.007 }, 00:08:49.007 "method": "bdev_nvme_attach_controller" 00:08:49.007 }' 00:08:49.267 [2024-11-20 17:29:30.845059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:49.267 [2024-11-20 17:29:30.845172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78268 ] 00:08:49.267 [2024-11-20 17:29:30.989447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.531 [2024-11-20 17:29:31.053429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.531 Running I/O for 10 seconds... 00:08:49.531 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.531 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:49.531 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:49.531 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.531 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:49.792 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
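
Annotation: the poll that starts above and completes just below is host_management.sh's waitforio helper: it repeatedly queries bdev I/O statistics over the bdevperf RPC socket and succeeds once the bdev has served at least 100 reads. A condensed sketch, assuming rpc_cmd is the scripts' wrapper around scripts/rpc.py as elsewhere in this log:

waitforio() {
  local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
  [ -z "$rpc_sock" ] && return 1     # need an RPC socket
  [ -z "$bdev" ] && return 1         # need a bdev name
  for ((i = 10; i != 0; i--)); do    # up to ten polls, 0.25 s apart
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0                          # enough I/O observed; bdevperf is live
      break
    fi
    sleep 0.25
  done
  return $ret
}
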
00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.053 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.053 [2024-11-20 17:29:31.695700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.053 [2024-11-20 17:29:31.695755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.053 [2024-11-20 17:29:31.695776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.053 [2024-11-20 17:29:31.695786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.053 [2024-11-20 17:29:31.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.053 [2024-11-20 17:29:31.695810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.053 [2024-11-20 17:29:31.695820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.053 [2024-11-20 17:29:31.695828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.053 [2024-11-20 17:29:31.695838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.053 [2024-11-20 17:29:31.695847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 
[... the same command/completion pattern repeats for the remaining in-flight I/O: WRITE sqid:1 cid:58-63 (lba:97536-98176 in 128-block steps) and READ sqid:1 cid:0-43 (lba:90112-95616 in 128-block steps), each command logged by nvme_qpair.c: 243:nvme_io_qpair_print_command and each followed by a completion printed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the repeated entries are elided here, and the run resumes below at READ cid:44 ...] 
00:08:50.054 [2024-11-20 17:29:31.696914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.696922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.696932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.696940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.696950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.696958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.696968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.696976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.696985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.696993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.697003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.697011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.697021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.697034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.697044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.697052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.697062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.054 [2024-11-20 17:29:31.697071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.054 [2024-11-20 17:29:31.697096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec6060 is same with the state(6) to be set 00:08:50.055 [2024-11-20 17:29:31.697185] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec6060 was disconnected and freed. reset controller. 
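
Annotation: the abort run above is the expected effect of the step this test exercises. With 64 commands queued, the script revoked the host's access to the subsystem (the nvmf_subsystem_remove_host call traced before the storm); the target then dropped the connection, every outstanding READ/WRITE completed with ABORTED - SQ DELETION (00/08), and the host driver freed the qpair and scheduled a controller reset. Condensed to the two RPCs involved, using rpc_cmd as the script does:

rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# in-flight I/O is aborted, bdevperf's verify job fails, the host resets
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# with access restored, the reset can complete
# ("Resetting controller successful" appears below)
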
00:08:50.055 task offset: 96896 on job bdev=Nvme0n1 fails 00:08:50.055 00:08:50.055 Latency(us) 00:08:50.055 [2024-11-20T17:29:31.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:50.055 Job: Nvme0n1 ended in about 0.46 seconds with error 00:08:50.055 Verification LBA range: start 0x0 length 0x400 00:08:50.055 Nvme0n1 : 0.46 1542.13 96.38 140.19 0.00 36531.42 2219.29 42896.29 00:08:50.055 [2024-11-20T17:29:31.843Z] =================================================================================================================== 00:08:50.055 [2024-11-20T17:29:31.843Z] Total : 1542.13 96.38 140.19 0.00 36531.42 2219.29 42896.29 00:08:50.055 [2024-11-20 17:29:31.698293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:50.055 [2024-11-20 17:29:31.700316] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.055 [2024-11-20 17:29:31.700343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54530 (9): Bad file descriptor 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.055 17:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:50.055 [2024-11-20 17:29:31.708967] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
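
Annotation: the failed-run table above can be cross-checked by hand. At an I/O size of 64 KiB (65536 B), throughput in MiB/s is IOPS/16, and the failure rate is consistent with the abort storm:

  MiB/s = IOPS × IO size / 2^20 = 1542.13 × 65536 / 1048576 ≈ 96.38
  Fail/s = 140.19, consistent with 64 queued commands aborted during the ~0.46 s run (64 / 0.4565 s ≈ 140.2)

Both values match the reported 96.38 MiB/s and 140.19 Fail/s columns.
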
00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 78268 00:08:50.992 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78268) - No such process 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:50.992 { 00:08:50.992 "params": { 00:08:50.992 "name": "Nvme$subsystem", 00:08:50.992 "trtype": "$TEST_TRANSPORT", 00:08:50.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.992 "adrfam": "ipv4", 00:08:50.992 "trsvcid": "$NVMF_PORT", 00:08:50.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.992 "hdgst": ${hdgst:-false}, 00:08:50.992 "ddgst": ${ddgst:-false} 00:08:50.992 }, 00:08:50.992 "method": "bdev_nvme_attach_controller" 00:08:50.992 } 00:08:50.992 EOF 00:08:50.992 )") 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:50.992 17:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:50.992 "params": { 00:08:50.992 "name": "Nvme0", 00:08:50.992 "trtype": "tcp", 00:08:50.992 "traddr": "10.0.0.3", 00:08:50.992 "adrfam": "ipv4", 00:08:50.992 "trsvcid": "4420", 00:08:50.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:50.992 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:50.992 "hdgst": false, 00:08:50.992 "ddgst": false 00:08:50.992 }, 00:08:50.992 "method": "bdev_nvme_attach_controller" 00:08:50.992 }' 00:08:50.992 [2024-11-20 17:29:32.768471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:50.992 [2024-11-20 17:29:32.768554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78314 ] 00:08:51.251 [2024-11-20 17:29:32.902869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.251 [2024-11-20 17:29:32.955915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.511 Running I/O for 1 seconds... 
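
Annotation: the "No such process" above is harmless: bdevperf from the failed run had already exited when spdk_app_stop fired, and the cleanup is written to tolerate that before relaunching bdevperf for the second, uninterrupted pass traced above. The pattern, as traced (the trap was installed before the first run):

trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
kill -9 "$perfpid" || true               # ignore an already-dead bdevperf
rm -f /var/tmp/spdk_cpu_lock_00{1..4}    # drop stale per-core lock files
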
00:08:52.448 1792.00 IOPS, 112.00 MiB/s 00:08:52.448 Latency(us) 00:08:52.448 [2024-11-20T17:29:34.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.448 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:52.448 Verification LBA range: start 0x0 length 0x400 00:08:52.448 Nvme0n1 : 1.02 1811.67 113.23 0.00 0.00 34705.53 4617.31 33602.09 00:08:52.448 [2024-11-20T17:29:34.236Z] =================================================================================================================== 00:08:52.448 [2024-11-20T17:29:34.236Z] Total : 1811.67 113.23 0.00 0.00 34705.53 4617.31 33602.09 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.707 rmmod nvme_tcp 00:08:52.707 rmmod nvme_fabrics 00:08:52.707 rmmod nvme_keyring 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 78215 ']' 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 78215 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 78215 ']' 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 78215 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.707 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78215 00:08:52.966 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:52.966 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:08:52.966 killing process with pid 78215 00:08:52.966 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78215' 00:08:52.966 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 78215 00:08:52.966 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 78215 00:08:53.224 [2024-11-20 17:29:34.773024] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:53.224 17:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:53.224 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:53.483 00:08:53.483 real 0m5.559s 00:08:53.483 user 0m19.631s 00:08:53.483 sys 0m1.471s 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.483 ************************************ 00:08:53.483 END TEST nvmf_host_management 00:08:53.483 ************************************ 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.483 ************************************ 00:08:53.483 START TEST nvmf_lvol 00:08:53.483 ************************************ 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.483 * Looking for test storage... 
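
Annotation: the lvol suite's preamble, traced just below, gates coverage collection on the installed lcov: the version string (1.15 here) is extracted with awk and compared against 2 by cmp_versions, which splits each version on '.', '-' and ':' and compares the fields numerically. A condensed sketch of that comparison (missing fields treated as 0; the script's case-based return plumbing is simplified):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local IFS='.-:' op=$2 v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
      [[ $op == '>' || $op == '>=' ]] && return 0 || return 1
    elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
      [[ $op == '<' || $op == '<=' ]] && return 0 || return 1
    fi
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt 1.15 2 && echo "lcov predates 2.x"    # first fields: 1 < 2, so true
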
00:08:53.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.483 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.743 --rc genhtml_branch_coverage=1 00:08:53.743 --rc genhtml_function_coverage=1 00:08:53.743 --rc genhtml_legend=1 00:08:53.743 --rc geninfo_all_blocks=1 00:08:53.743 --rc geninfo_unexecuted_blocks=1 00:08:53.743 00:08:53.743 ' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.743 --rc genhtml_branch_coverage=1 00:08:53.743 --rc genhtml_function_coverage=1 00:08:53.743 --rc genhtml_legend=1 00:08:53.743 --rc geninfo_all_blocks=1 00:08:53.743 --rc geninfo_unexecuted_blocks=1 00:08:53.743 00:08:53.743 ' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.743 --rc genhtml_branch_coverage=1 00:08:53.743 --rc genhtml_function_coverage=1 00:08:53.743 --rc genhtml_legend=1 00:08:53.743 --rc geninfo_all_blocks=1 00:08:53.743 --rc geninfo_unexecuted_blocks=1 00:08:53.743 00:08:53.743 ' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.743 --rc genhtml_branch_coverage=1 00:08:53.743 --rc genhtml_function_coverage=1 00:08:53.743 --rc genhtml_legend=1 00:08:53.743 --rc geninfo_all_blocks=1 00:08:53.743 --rc geninfo_unexecuted_blocks=1 00:08:53.743 00:08:53.743 ' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.743 17:29:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.743 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.744 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:53.744 
17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
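
Annotation: nvmf_veth_init consumes the NVMF_* interface names defined around this point to build the test network that the rest of the trace first probes for and then creates: the target runs inside a network namespace, initiator and target each get a veth pair, and a bridge joins the free peer ends. Condensed here to a single initiator/target pair; every command appears in the full trace below, which repeats the steps for the second interfaces:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the free peer ends
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
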
00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:53.744 Cannot find device "nvmf_init_br" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:53.744 Cannot find device "nvmf_init_br2" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:53.744 Cannot find device "nvmf_tgt_br" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.744 Cannot find device "nvmf_tgt_br2" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:53.744 Cannot find device "nvmf_init_br" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:53.744 Cannot find device "nvmf_init_br2" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:53.744 Cannot find device "nvmf_tgt_br" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:53.744 Cannot find device "nvmf_tgt_br2" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:53.744 Cannot find device "nvmf_br" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:53.744 Cannot find device "nvmf_init_if" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:53.744 Cannot find device "nvmf_init_if2" 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.744 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:54.003 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:54.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:54.004 00:08:54.004 --- 10.0.0.3 ping statistics --- 00:08:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.004 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:54.004 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:54.004 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:54.004 00:08:54.004 --- 10.0.0.4 ping statistics --- 00:08:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.004 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:54.004 00:08:54.004 --- 10.0.0.1 ping statistics --- 00:08:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.004 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:08:54.004 00:08:54.004 --- 10.0.0.2 ping statistics --- 00:08:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.004 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=78586 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 78586 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 78586 ']' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.004 17:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.004 [2024-11-20 17:29:35.780140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
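With the namespace plumbing verified by the four pings, the nvmf_lvol test drives this target through a fixed RPC sequence: stripe two 64 MiB malloc bdevs into a raid0, put a logical volume store on it, carve out a 20 MiB volume, export it over NVMe/TCP, and run snapshot/resize/clone/inflate while spdk_nvme_perf writes to the namespace. A condensed sketch of that sequence as rpc.py calls, reconstructed from the trace that follows (the UUIDs are values returned by the create calls, captured here into shell variables):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # Malloc0: 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512                        # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore on top of the raid
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the volume's current state
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # give the clone its own clusters

The metadata operations deliberately land while the perf workload is in flight, exercising lvol management under live I/O.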
00:08:54.004 [2024-11-20 17:29:35.780229] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.263 [2024-11-20 17:29:35.924219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.263 [2024-11-20 17:29:35.991426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.263 [2024-11-20 17:29:35.991490] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.263 [2024-11-20 17:29:35.991525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.263 [2024-11-20 17:29:35.991537] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.263 [2024-11-20 17:29:35.991546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.263 [2024-11-20 17:29:35.991866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.263 [2024-11-20 17:29:35.995168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.263 [2024-11-20 17:29:35.995198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.199 17:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:55.457 [2024-11-20 17:29:37.168166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.457 17:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.024 17:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:56.024 17:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.284 17:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:56.284 17:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:56.542 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:56.801 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d9402b48-0851-423e-8884-000bc27c971a 00:08:56.801 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
d9402b48-0851-423e-8884-000bc27c971a lvol 20 00:08:57.061 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=32abcfaa-953f-45dc-8b2e-ef1c9d8c7781 00:08:57.061 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:57.320 17:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32abcfaa-953f-45dc-8b2e-ef1c9d8c7781 00:08:57.320 17:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:57.588 [2024-11-20 17:29:39.285917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:57.588 17:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:57.859 17:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=78739 00:08:57.859 17:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:57.859 17:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:58.794 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 32abcfaa-953f-45dc-8b2e-ef1c9d8c7781 MY_SNAPSHOT 00:08:59.362 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c87b6d68-10c8-4de0-94b2-7d2592d4016c 00:08:59.362 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 32abcfaa-953f-45dc-8b2e-ef1c9d8c7781 30 00:08:59.622 17:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone c87b6d68-10c8-4de0-94b2-7d2592d4016c MY_CLONE 00:08:59.881 17:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c1633da0-8ef0-425c-9ba8-b5a0ce0d6ecc 00:08:59.881 17:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c1633da0-8ef0-425c-9ba8-b5a0ce0d6ecc 00:09:00.818 17:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 78739 00:09:08.937 Initializing NVMe Controllers 00:09:08.937 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.937 Controller IO queue size 128, less than required. 00:09:08.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:08.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:08.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:08.937 Initialization complete. Launching workers. 
00:09:08.937 ========================================================
00:09:08.937                                                                               Latency(us)
00:09:08.937 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:08.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:    7940.10      31.02   16122.06    2172.77  142207.19
00:09:08.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:    7528.50      29.41   17008.08    3845.42   61090.93
00:09:08.937 ========================================================
00:09:08.937 Total                                                                    :   15468.59      60.42   16553.28    2172.77  142207.19
00:09:08.937
00:09:08.937 17:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.937 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 32abcfaa-953f-45dc-8b2e-ef1c9d8c7781 00:09:08.937 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9402b48-0851-423e-8884-000bc27c971a 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.197 rmmod nvme_tcp 00:09:09.197 rmmod nvme_fabrics 00:09:09.197 rmmod nvme_keyring 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 78586 ']' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 78586 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 78586 ']' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 78586 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78586 00:09:09.197 killing process with pid 78586 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78586' 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 78586 00:09:09.197 17:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 78586 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.456 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:09.715 ************************************ 00:09:09.715 END TEST nvmf_lvol 00:09:09.715 ************************************ 00:09:09.715 00:09:09.715 real 0m16.248s 00:09:09.715 user 
1m7.632s 00:09:09.715 sys 0m3.282s 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.715 ************************************ 00:09:09.715 START TEST nvmf_lvs_grow 00:09:09.715 ************************************ 00:09:09.715 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:09.975 * Looking for test storage... 00:09:09.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.975 --rc genhtml_branch_coverage=1 00:09:09.975 --rc genhtml_function_coverage=1 00:09:09.975 --rc genhtml_legend=1 00:09:09.975 --rc geninfo_all_blocks=1 00:09:09.975 --rc geninfo_unexecuted_blocks=1 00:09:09.975 00:09:09.975 ' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.975 --rc genhtml_branch_coverage=1 00:09:09.975 --rc genhtml_function_coverage=1 00:09:09.975 --rc genhtml_legend=1 00:09:09.975 --rc geninfo_all_blocks=1 00:09:09.975 --rc geninfo_unexecuted_blocks=1 00:09:09.975 00:09:09.975 ' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.975 --rc genhtml_branch_coverage=1 00:09:09.975 --rc genhtml_function_coverage=1 00:09:09.975 --rc genhtml_legend=1 00:09:09.975 --rc geninfo_all_blocks=1 00:09:09.975 --rc geninfo_unexecuted_blocks=1 00:09:09.975 00:09:09.975 ' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.975 --rc genhtml_branch_coverage=1 00:09:09.975 --rc genhtml_function_coverage=1 00:09:09.975 --rc genhtml_legend=1 00:09:09.975 --rc geninfo_all_blocks=1 00:09:09.975 --rc geninfo_unexecuted_blocks=1 00:09:09.975 00:09:09.975 ' 00:09:09.975 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:09.976 17:29:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
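Two RPC endpoints are in play from this point on: the nvmf target keeps the default /var/tmp/spdk.sock, while the bdevperf process started later in this test listens on the /var/tmp/bdevperf.sock defined here, so the two SPDK applications can be configured independently. A minimal sketch of that dual-socket pattern (the bdevperf flags are copied from its invocation later in the trace; bdev_get_bdevs is just a stand-in query):

    # Target side: rpc.py defaults to /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs

    # bdevperf side: its own socket (-r) and started idle (-z), so it can be
    # configured over that socket before the workload is kicked off.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs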
00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.976 Cannot find device "nvmf_init_br" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.976 Cannot find device "nvmf_init_br2" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.976 Cannot find device "nvmf_tgt_br" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.976 Cannot find device "nvmf_tgt_br2" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.976 Cannot find device "nvmf_init_br" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.976 Cannot find device "nvmf_init_br2" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.976 Cannot find device "nvmf_tgt_br" 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:09.976 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.235 Cannot find device "nvmf_tgt_br2" 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.236 Cannot find device "nvmf_br" 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.236 Cannot find device "nvmf_init_if" 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.236 Cannot find device "nvmf_init_if2" 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.236 17:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.236 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
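The firewall rules that come next go through the ipts wrapper, which appends an "-m comment --comment SPDK_NVMF:..." tag to every rule it inserts; teardown (the iptr helper visible at the end of the previous test) then removes them by filtering a full dump of the ruleset rather than tracking rule positions. A minimal sketch of that tag-and-sweep pattern, using the expanded commands from the trace:

    # Insert: each SPDK rule carries a recognizable comment.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Sweep: dump the ruleset, drop the tagged lines, restore the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The sweep is order-independent: however many tests added rules, they vanish as a set without disturbing anything else in the chains.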
00:09:10.236 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.236 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.236 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.236 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:10.495 00:09:10.495 --- 10.0.0.3 ping statistics --- 00:09:10.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.495 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:09:10.495 00:09:10.495 --- 10.0.0.4 ping statistics --- 00:09:10.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.495 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:10.495 00:09:10.495 --- 10.0.0.1 ping statistics --- 00:09:10.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.495 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:09:10.495 00:09:10.495 --- 10.0.0.2 ping statistics --- 00:09:10.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.495 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=79158 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 79158 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 79158 ']' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.495 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.495 [2024-11-20 17:29:52.156543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
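nvmfappstart launches nvmf_tgt inside the test namespace and then blocks in waitforlisten until the application's RPC socket answers, which is why the "Waiting for process..." message interleaves with the DPDK initialization banner here. A rough sketch of that start-and-poll pattern (the real helper lives in autotest_common.sh; the retry budget of 100 matches the max_retries value logged above):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do   # poll until the RPC server responds
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null && break
        sleep 0.1
    done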
00:09:10.495 [2024-11-20 17:29:52.156810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.755 [2024-11-20 17:29:52.299085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.755 [2024-11-20 17:29:52.358432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.755 [2024-11-20 17:29:52.358517] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.755 [2024-11-20 17:29:52.358537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.755 [2024-11-20 17:29:52.358544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.755 [2024-11-20 17:29:52.358551] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.755 [2024-11-20 17:29:52.358578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.755 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.322 [2024-11-20 17:29:52.815357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.322 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.323 ************************************ 00:09:11.323 START TEST lvs_grow_clean 00:09:11.323 ************************************ 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:11.323 17:29:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.323 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.581 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:11.581 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:11.840 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aab6a5b4-c217-481d-8b95-886282e66594 00:09:11.840 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:11.840 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.102 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.102 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.102 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aab6a5b4-c217-481d-8b95-886282e66594 lvol 150 00:09:12.402 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=44c7df73-2ed0-4b73-82b3-13c35f0c4133 00:09:12.402 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.402 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:12.402 [2024-11-20 17:29:54.111748] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:12.402 [2024-11-20 17:29:54.111812] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:12.402 true 00:09:12.402 17:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:12.402 17:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:12.674 17:29:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:12.674 17:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.933 17:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44c7df73-2ed0-4b73-82b3-13c35f0c4133 00:09:13.191 17:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:13.450 [2024-11-20 17:29:55.220289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79306 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79306 /var/tmp/bdevperf.sock 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 79306 ']' 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:13.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.709 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:13.967 [2024-11-20 17:29:55.506169] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
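With total_data_clusters confirmed at 49, the trace above exports the lvol over NVMe/TCP and launches a bdevperf instance against it. A sketch of that sequence, with <lvol-uuid> standing in for the bdev_lvol_create output (44c7df73-2ed0-4b73-82b3-13c35f0c4133 in this run); every flag below is taken from the trace:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0          # allow any host, serial SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>          # lvol becomes namespace 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0          # Nvme0n1 appears in bdevperf
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests           # kicks off the 10 s randwrite run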
00:09:13.967 [2024-11-20 17:29:55.506254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79306 ] 00:09:13.967 [2024-11-20 17:29:55.642478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.967 [2024-11-20 17:29:55.730199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.901 17:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.901 17:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:14.901 17:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.160 Nvme0n1 00:09:15.160 17:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.419 [ 00:09:15.419 { 00:09:15.419 "aliases": [ 00:09:15.419 "44c7df73-2ed0-4b73-82b3-13c35f0c4133" 00:09:15.419 ], 00:09:15.419 "assigned_rate_limits": { 00:09:15.419 "r_mbytes_per_sec": 0, 00:09:15.419 "rw_ios_per_sec": 0, 00:09:15.419 "rw_mbytes_per_sec": 0, 00:09:15.419 "w_mbytes_per_sec": 0 00:09:15.419 }, 00:09:15.419 "block_size": 4096, 00:09:15.419 "claimed": false, 00:09:15.419 "driver_specific": { 00:09:15.419 "mp_policy": "active_passive", 00:09:15.419 "nvme": [ 00:09:15.419 { 00:09:15.419 "ctrlr_data": { 00:09:15.419 "ana_reporting": false, 00:09:15.419 "cntlid": 1, 00:09:15.419 "firmware_revision": "24.09.1", 00:09:15.419 "model_number": "SPDK bdev Controller", 00:09:15.419 "multi_ctrlr": true, 00:09:15.419 "oacs": { 00:09:15.419 "firmware": 0, 00:09:15.419 "format": 0, 00:09:15.419 "ns_manage": 0, 00:09:15.419 "security": 0 00:09:15.419 }, 00:09:15.419 "serial_number": "SPDK0", 00:09:15.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.419 "vendor_id": "0x8086" 00:09:15.419 }, 00:09:15.419 "ns_data": { 00:09:15.419 "can_share": true, 00:09:15.419 "id": 1 00:09:15.419 }, 00:09:15.419 "trid": { 00:09:15.419 "adrfam": "IPv4", 00:09:15.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.419 "traddr": "10.0.0.3", 00:09:15.419 "trsvcid": "4420", 00:09:15.419 "trtype": "TCP" 00:09:15.419 }, 00:09:15.419 "vs": { 00:09:15.419 "nvme_version": "1.3" 00:09:15.419 } 00:09:15.419 } 00:09:15.419 ] 00:09:15.419 }, 00:09:15.419 "memory_domains": [ 00:09:15.419 { 00:09:15.419 "dma_device_id": "system", 00:09:15.419 "dma_device_type": 1 00:09:15.419 } 00:09:15.419 ], 00:09:15.419 "name": "Nvme0n1", 00:09:15.419 "num_blocks": 38912, 00:09:15.419 "numa_id": -1, 00:09:15.419 "product_name": "NVMe disk", 00:09:15.419 "supported_io_types": { 00:09:15.419 "abort": true, 00:09:15.419 "compare": true, 00:09:15.419 "compare_and_write": true, 00:09:15.419 "copy": true, 00:09:15.419 "flush": true, 00:09:15.419 "get_zone_info": false, 00:09:15.419 "nvme_admin": true, 00:09:15.419 "nvme_io": true, 00:09:15.419 "nvme_io_md": false, 00:09:15.419 "nvme_iov_md": false, 00:09:15.419 "read": true, 00:09:15.419 "reset": true, 00:09:15.419 "seek_data": false, 00:09:15.419 "seek_hole": false, 00:09:15.419 "unmap": true, 00:09:15.419 
"write": true, 00:09:15.419 "write_zeroes": true, 00:09:15.419 "zcopy": false, 00:09:15.419 "zone_append": false, 00:09:15.419 "zone_management": false 00:09:15.419 }, 00:09:15.419 "uuid": "44c7df73-2ed0-4b73-82b3-13c35f0c4133", 00:09:15.419 "zoned": false 00:09:15.419 } 00:09:15.419 ] 00:09:15.419 17:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79354 00:09:15.419 17:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.419 17:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.419 Running I/O for 10 seconds... 00:09:16.357 Latency(us) 00:09:16.357 [2024-11-20T17:29:58.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.357 Nvme0n1 : 1.00 7396.00 28.89 0.00 0.00 0.00 0.00 0.00 00:09:16.357 [2024-11-20T17:29:58.145Z] =================================================================================================================== 00:09:16.357 [2024-11-20T17:29:58.145Z] Total : 7396.00 28.89 0.00 0.00 0.00 0.00 0.00 00:09:16.357 00:09:17.293 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:17.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.552 Nvme0n1 : 2.00 7372.50 28.80 0.00 0.00 0.00 0.00 0.00 00:09:17.552 [2024-11-20T17:29:59.340Z] =================================================================================================================== 00:09:17.552 [2024-11-20T17:29:59.340Z] Total : 7372.50 28.80 0.00 0.00 0.00 0.00 0.00 00:09:17.552 00:09:17.811 true 00:09:17.811 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:17.811 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.070 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.070 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.070 17:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 79354 00:09:18.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.329 Nvme0n1 : 3.00 7432.00 29.03 0.00 0.00 0.00 0.00 0.00 00:09:18.329 [2024-11-20T17:30:00.117Z] =================================================================================================================== 00:09:18.329 [2024-11-20T17:30:00.117Z] Total : 7432.00 29.03 0.00 0.00 0.00 0.00 0.00 00:09:18.329 00:09:19.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.705 Nvme0n1 : 4.00 7371.75 28.80 0.00 0.00 0.00 0.00 0.00 00:09:19.705 [2024-11-20T17:30:01.493Z] =================================================================================================================== 00:09:19.705 [2024-11-20T17:30:01.493Z] Total : 7371.75 28.80 0.00 0.00 0.00 
0.00 0.00 00:09:19.705 00:09:20.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.641 Nvme0n1 : 5.00 7322.20 28.60 0.00 0.00 0.00 0.00 0.00 00:09:20.641 [2024-11-20T17:30:02.429Z] =================================================================================================================== 00:09:20.641 [2024-11-20T17:30:02.429Z] Total : 7322.20 28.60 0.00 0.00 0.00 0.00 0.00 00:09:20.641 00:09:21.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.578 Nvme0n1 : 6.00 7252.17 28.33 0.00 0.00 0.00 0.00 0.00 00:09:21.578 [2024-11-20T17:30:03.366Z] =================================================================================================================== 00:09:21.578 [2024-11-20T17:30:03.366Z] Total : 7252.17 28.33 0.00 0.00 0.00 0.00 0.00 00:09:21.578 00:09:22.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.512 Nvme0n1 : 7.00 7218.86 28.20 0.00 0.00 0.00 0.00 0.00 00:09:22.512 [2024-11-20T17:30:04.300Z] =================================================================================================================== 00:09:22.512 [2024-11-20T17:30:04.300Z] Total : 7218.86 28.20 0.00 0.00 0.00 0.00 0.00 00:09:22.512 00:09:23.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.449 Nvme0n1 : 8.00 7132.00 27.86 0.00 0.00 0.00 0.00 0.00 00:09:23.449 [2024-11-20T17:30:05.237Z] =================================================================================================================== 00:09:23.449 [2024-11-20T17:30:05.237Z] Total : 7132.00 27.86 0.00 0.00 0.00 0.00 0.00 00:09:23.449 00:09:24.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.385 Nvme0n1 : 9.00 7114.11 27.79 0.00 0.00 0.00 0.00 0.00 00:09:24.385 [2024-11-20T17:30:06.173Z] =================================================================================================================== 00:09:24.385 [2024-11-20T17:30:06.173Z] Total : 7114.11 27.79 0.00 0.00 0.00 0.00 0.00 00:09:24.385 00:09:25.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.762 Nvme0n1 : 10.00 7130.40 27.85 0.00 0.00 0.00 0.00 0.00 00:09:25.762 [2024-11-20T17:30:07.550Z] =================================================================================================================== 00:09:25.762 [2024-11-20T17:30:07.550Z] Total : 7130.40 27.85 0.00 0.00 0.00 0.00 0.00 00:09:25.762 00:09:25.762 00:09:25.762 Latency(us) 00:09:25.762 [2024-11-20T17:30:07.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.762 Nvme0n1 : 10.02 7131.40 27.86 0.00 0.00 17943.24 8102.63 76736.70 00:09:25.762 [2024-11-20T17:30:07.550Z] =================================================================================================================== 00:09:25.762 [2024-11-20T17:30:07.550Z] Total : 7131.40 27.86 0.00 0.00 17943.24 8102.63 76736.70 00:09:25.762 { 00:09:25.762 "results": [ 00:09:25.762 { 00:09:25.762 "job": "Nvme0n1", 00:09:25.762 "core_mask": "0x2", 00:09:25.762 "workload": "randwrite", 00:09:25.762 "status": "finished", 00:09:25.762 "queue_depth": 128, 00:09:25.762 "io_size": 4096, 00:09:25.762 "runtime": 10.016541, 00:09:25.762 "iops": 7131.403944735014, 00:09:25.762 "mibps": 27.85704665912115, 00:09:25.762 "io_failed": 0, 00:09:25.762 "io_timeout": 0, 00:09:25.762 "avg_latency_us": 17943.23821826734, 
00:09:25.762 "min_latency_us": 8102.632727272728, 00:09:25.762 "max_latency_us": 76736.69818181818 00:09:25.762 } 00:09:25.762 ], 00:09:25.762 "core_count": 1 00:09:25.762 } 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79306 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 79306 ']' 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 79306 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79306 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:25.762 killing process with pid 79306 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79306' 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 79306 00:09:25.762 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.762 00:09:25.762 Latency(us) 00:09:25.762 [2024-11-20T17:30:07.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.762 [2024-11-20T17:30:07.550Z] =================================================================================================================== 00:09:25.762 [2024-11-20T17:30:07.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 79306 00:09:25.762 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:26.025 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.284 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:26.284 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:26.543 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:26.543 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:26.543 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.802 [2024-11-20 17:30:08.486758] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.802 
17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:26.802 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:26.802 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:26.802 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:26.803 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:27.062 2024/11/20 17:30:08 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:aab6a5b4-c217-481d-8b95-886282e66594], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:27.062 request: 00:09:27.062 { 00:09:27.062 "method": "bdev_lvol_get_lvstores", 00:09:27.062 "params": { 00:09:27.062 "uuid": "aab6a5b4-c217-481d-8b95-886282e66594" 00:09:27.062 } 00:09:27.062 } 00:09:27.062 Got JSON-RPC error response 00:09:27.062 GoRPCClient: error on JSON-RPC call 00:09:27.062 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:27.062 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.062 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.062 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.062 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.321 aio_bdev 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 44c7df73-2ed0-4b73-82b3-13c35f0c4133 00:09:27.321 17:30:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=44c7df73-2ed0-4b73-82b3-13c35f0c4133 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.321 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.584 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44c7df73-2ed0-4b73-82b3-13c35f0c4133 -t 2000 00:09:27.842 [ 00:09:27.842 { 00:09:27.842 "aliases": [ 00:09:27.842 "lvs/lvol" 00:09:27.842 ], 00:09:27.842 "assigned_rate_limits": { 00:09:27.842 "r_mbytes_per_sec": 0, 00:09:27.842 "rw_ios_per_sec": 0, 00:09:27.842 "rw_mbytes_per_sec": 0, 00:09:27.842 "w_mbytes_per_sec": 0 00:09:27.842 }, 00:09:27.842 "block_size": 4096, 00:09:27.842 "claimed": false, 00:09:27.842 "driver_specific": { 00:09:27.842 "lvol": { 00:09:27.842 "base_bdev": "aio_bdev", 00:09:27.842 "clone": false, 00:09:27.842 "esnap_clone": false, 00:09:27.842 "lvol_store_uuid": "aab6a5b4-c217-481d-8b95-886282e66594", 00:09:27.842 "num_allocated_clusters": 38, 00:09:27.842 "snapshot": false, 00:09:27.842 "thin_provision": false 00:09:27.842 } 00:09:27.842 }, 00:09:27.842 "name": "44c7df73-2ed0-4b73-82b3-13c35f0c4133", 00:09:27.842 "num_blocks": 38912, 00:09:27.842 "product_name": "Logical Volume", 00:09:27.842 "supported_io_types": { 00:09:27.842 "abort": false, 00:09:27.842 "compare": false, 00:09:27.842 "compare_and_write": false, 00:09:27.842 "copy": false, 00:09:27.842 "flush": false, 00:09:27.842 "get_zone_info": false, 00:09:27.842 "nvme_admin": false, 00:09:27.842 "nvme_io": false, 00:09:27.842 "nvme_io_md": false, 00:09:27.842 "nvme_iov_md": false, 00:09:27.842 "read": true, 00:09:27.842 "reset": true, 00:09:27.842 "seek_data": true, 00:09:27.842 "seek_hole": true, 00:09:27.842 "unmap": true, 00:09:27.842 "write": true, 00:09:27.842 "write_zeroes": true, 00:09:27.843 "zcopy": false, 00:09:27.843 "zone_append": false, 00:09:27.843 "zone_management": false 00:09:27.843 }, 00:09:27.843 "uuid": "44c7df73-2ed0-4b73-82b3-13c35f0c4133", 00:09:27.843 "zoned": false 00:09:27.843 } 00:09:27.843 ] 00:09:27.843 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:27.843 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:27.843 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:28.101 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:28.101 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aab6a5b4-c217-481d-8b95-886282e66594 00:09:28.101 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.359 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.359 17:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 44c7df73-2ed0-4b73-82b3-13c35f0c4133 00:09:28.617 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aab6a5b4-c217-481d-8b95-886282e66594 00:09:28.876 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.134 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.701 ************************************ 00:09:29.701 END TEST lvs_grow_clean 00:09:29.701 ************************************ 00:09:29.701 00:09:29.701 real 0m18.349s 00:09:29.701 user 0m17.539s 00:09:29.701 sys 0m2.341s 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.701 ************************************ 00:09:29.701 START TEST lvs_grow_dirty 00:09:29.701 ************************************ 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.701 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.701 
17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.959 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.959 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:30.218 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:30.218 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:30.218 17:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:30.477 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:30.477 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:30.477 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad lvol 150 00:09:30.737 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=168a55e7-23a4-457f-851c-2648dd25452b 00:09:30.737 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:30.737 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.995 [2024-11-20 17:30:12.570306] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.995 [2024-11-20 17:30:12.570379] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.995 true 00:09:30.995 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:30.995 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:31.253 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:31.253 17:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:31.253 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 168a55e7-23a4-457f-851c-2648dd25452b 00:09:31.511 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:31.768 [2024-11-20 17:30:13.498793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:31.768 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:32.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79753 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79753 /var/tmp/bdevperf.sock 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 79753 ']' 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.026 17:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.026 [2024-11-20 17:30:13.773624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
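The cluster counts in these traces follow from the sizes involved: the 200 MiB backing file is 51200 blocks of 4 KiB, i.e. 50 clusters at --cluster-sz 4194304, of which the lvstore reports 49 as data clusters (the remainder holds metadata); after the file grows to 400 MiB (102400 blocks) it reports 99 of 100. A sketch of the mid-I/O grow step that follows, with <lvs-uuid> standing in for 6fa90e7a-6e6f-4839-a734-a661b60be7ad:

truncate -s 400M test/nvmf/target/aio_bdev             # 400 MiB == 102400 blocks of 4 KiB
./scripts/rpc.py bdev_aio_rescan aio_bdev              # bdev picks up the resize (51200 -> 102400 blocks)
./scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>  # lvstore claims the new clusters while bdevperf is writing
./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 -> 99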
00:09:32.026 [2024-11-20 17:30:13.773716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79753 ] 00:09:32.284 [2024-11-20 17:30:13.910089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.284 [2024-11-20 17:30:14.004487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.220 17:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.220 17:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:33.220 17:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:33.220 Nvme0n1 00:09:33.220 17:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:33.478 [ 00:09:33.478 { 00:09:33.478 "aliases": [ 00:09:33.478 "168a55e7-23a4-457f-851c-2648dd25452b" 00:09:33.478 ], 00:09:33.478 "assigned_rate_limits": { 00:09:33.478 "r_mbytes_per_sec": 0, 00:09:33.478 "rw_ios_per_sec": 0, 00:09:33.478 "rw_mbytes_per_sec": 0, 00:09:33.478 "w_mbytes_per_sec": 0 00:09:33.478 }, 00:09:33.478 "block_size": 4096, 00:09:33.478 "claimed": false, 00:09:33.478 "driver_specific": { 00:09:33.478 "mp_policy": "active_passive", 00:09:33.478 "nvme": [ 00:09:33.478 { 00:09:33.478 "ctrlr_data": { 00:09:33.478 "ana_reporting": false, 00:09:33.478 "cntlid": 1, 00:09:33.478 "firmware_revision": "24.09.1", 00:09:33.478 "model_number": "SPDK bdev Controller", 00:09:33.478 "multi_ctrlr": true, 00:09:33.478 "oacs": { 00:09:33.478 "firmware": 0, 00:09:33.478 "format": 0, 00:09:33.478 "ns_manage": 0, 00:09:33.478 "security": 0 00:09:33.478 }, 00:09:33.478 "serial_number": "SPDK0", 00:09:33.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:33.478 "vendor_id": "0x8086" 00:09:33.479 }, 00:09:33.479 "ns_data": { 00:09:33.479 "can_share": true, 00:09:33.479 "id": 1 00:09:33.479 }, 00:09:33.479 "trid": { 00:09:33.479 "adrfam": "IPv4", 00:09:33.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:33.479 "traddr": "10.0.0.3", 00:09:33.479 "trsvcid": "4420", 00:09:33.479 "trtype": "TCP" 00:09:33.479 }, 00:09:33.479 "vs": { 00:09:33.479 "nvme_version": "1.3" 00:09:33.479 } 00:09:33.479 } 00:09:33.479 ] 00:09:33.479 }, 00:09:33.479 "memory_domains": [ 00:09:33.479 { 00:09:33.479 "dma_device_id": "system", 00:09:33.479 "dma_device_type": 1 00:09:33.479 } 00:09:33.479 ], 00:09:33.479 "name": "Nvme0n1", 00:09:33.479 "num_blocks": 38912, 00:09:33.479 "numa_id": -1, 00:09:33.479 "product_name": "NVMe disk", 00:09:33.479 "supported_io_types": { 00:09:33.479 "abort": true, 00:09:33.479 "compare": true, 00:09:33.479 "compare_and_write": true, 00:09:33.479 "copy": true, 00:09:33.479 "flush": true, 00:09:33.479 "get_zone_info": false, 00:09:33.479 "nvme_admin": true, 00:09:33.479 "nvme_io": true, 00:09:33.479 "nvme_io_md": false, 00:09:33.479 "nvme_iov_md": false, 00:09:33.479 "read": true, 00:09:33.479 "reset": true, 00:09:33.479 "seek_data": false, 00:09:33.479 "seek_hole": false, 00:09:33.479 "unmap": true, 00:09:33.479 
"write": true, 00:09:33.479 "write_zeroes": true, 00:09:33.479 "zcopy": false, 00:09:33.479 "zone_append": false, 00:09:33.479 "zone_management": false 00:09:33.479 }, 00:09:33.479 "uuid": "168a55e7-23a4-457f-851c-2648dd25452b", 00:09:33.479 "zoned": false 00:09:33.479 } 00:09:33.479 ] 00:09:33.479 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79801 00:09:33.479 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:33.479 17:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:33.479 Running I/O for 10 seconds... 00:09:34.856 Latency(us) 00:09:34.856 [2024-11-20T17:30:16.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.856 Nvme0n1 : 1.00 7527.00 29.40 0.00 0.00 0.00 0.00 0.00 00:09:34.856 [2024-11-20T17:30:16.644Z] =================================================================================================================== 00:09:34.856 [2024-11-20T17:30:16.644Z] Total : 7527.00 29.40 0.00 0.00 0.00 0.00 0.00 00:09:34.856 00:09:35.424 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:35.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.683 Nvme0n1 : 2.00 7484.00 29.23 0.00 0.00 0.00 0.00 0.00 00:09:35.683 [2024-11-20T17:30:17.471Z] =================================================================================================================== 00:09:35.683 [2024-11-20T17:30:17.471Z] Total : 7484.00 29.23 0.00 0.00 0.00 0.00 0.00 00:09:35.683 00:09:35.683 true 00:09:35.942 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:35.942 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:36.201 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:36.201 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:36.201 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 79801 00:09:36.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.769 Nvme0n1 : 3.00 7464.67 29.16 0.00 0.00 0.00 0.00 0.00 00:09:36.769 [2024-11-20T17:30:18.557Z] =================================================================================================================== 00:09:36.769 [2024-11-20T17:30:18.557Z] Total : 7464.67 29.16 0.00 0.00 0.00 0.00 0.00 00:09:36.769 00:09:37.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.704 Nvme0n1 : 4.00 7485.50 29.24 0.00 0.00 0.00 0.00 0.00 00:09:37.704 [2024-11-20T17:30:19.493Z] =================================================================================================================== 00:09:37.705 [2024-11-20T17:30:19.493Z] Total : 7485.50 29.24 0.00 0.00 0.00 
0.00 0.00 00:09:37.705 00:09:38.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.641 Nvme0n1 : 5.00 7497.80 29.29 0.00 0.00 0.00 0.00 0.00 00:09:38.641 [2024-11-20T17:30:20.429Z] =================================================================================================================== 00:09:38.641 [2024-11-20T17:30:20.429Z] Total : 7497.80 29.29 0.00 0.00 0.00 0.00 0.00 00:09:38.641 00:09:39.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.577 Nvme0n1 : 6.00 7312.67 28.57 0.00 0.00 0.00 0.00 0.00 00:09:39.577 [2024-11-20T17:30:21.365Z] =================================================================================================================== 00:09:39.577 [2024-11-20T17:30:21.365Z] Total : 7312.67 28.57 0.00 0.00 0.00 0.00 0.00 00:09:39.577 00:09:40.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.514 Nvme0n1 : 7.00 7370.14 28.79 0.00 0.00 0.00 0.00 0.00 00:09:40.514 [2024-11-20T17:30:22.302Z] =================================================================================================================== 00:09:40.514 [2024-11-20T17:30:22.302Z] Total : 7370.14 28.79 0.00 0.00 0.00 0.00 0.00 00:09:40.514 00:09:41.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.485 Nvme0n1 : 8.00 7470.12 29.18 0.00 0.00 0.00 0.00 0.00 00:09:41.485 [2024-11-20T17:30:23.273Z] =================================================================================================================== 00:09:41.485 [2024-11-20T17:30:23.273Z] Total : 7470.12 29.18 0.00 0.00 0.00 0.00 0.00 00:09:41.485 00:09:42.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.872 Nvme0n1 : 9.00 7492.67 29.27 0.00 0.00 0.00 0.00 0.00 00:09:42.872 [2024-11-20T17:30:24.660Z] =================================================================================================================== 00:09:42.872 [2024-11-20T17:30:24.660Z] Total : 7492.67 29.27 0.00 0.00 0.00 0.00 0.00 00:09:42.872 00:09:43.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.807 Nvme0n1 : 10.00 7560.70 29.53 0.00 0.00 0.00 0.00 0.00 00:09:43.807 [2024-11-20T17:30:25.595Z] =================================================================================================================== 00:09:43.807 [2024-11-20T17:30:25.595Z] Total : 7560.70 29.53 0.00 0.00 0.00 0.00 0.00 00:09:43.807 00:09:43.807 00:09:43.807 Latency(us) 00:09:43.807 [2024-11-20T17:30:25.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.807 Nvme0n1 : 10.01 7566.44 29.56 0.00 0.00 16908.36 5868.45 172538.41 00:09:43.807 [2024-11-20T17:30:25.595Z] =================================================================================================================== 00:09:43.807 [2024-11-20T17:30:25.595Z] Total : 7566.44 29.56 0.00 0.00 16908.36 5868.45 172538.41 00:09:43.807 { 00:09:43.807 "results": [ 00:09:43.807 { 00:09:43.807 "job": "Nvme0n1", 00:09:43.807 "core_mask": "0x2", 00:09:43.807 "workload": "randwrite", 00:09:43.807 "status": "finished", 00:09:43.807 "queue_depth": 128, 00:09:43.807 "io_size": 4096, 00:09:43.807 "runtime": 10.00933, 00:09:43.807 "iops": 7566.440511003234, 00:09:43.807 "mibps": 29.556408246106383, 00:09:43.807 "io_failed": 0, 00:09:43.807 "io_timeout": 0, 00:09:43.807 "avg_latency_us": 
16908.357619978753, 00:09:43.807 "min_latency_us": 5868.450909090909, 00:09:43.807 "max_latency_us": 172538.41454545455 00:09:43.807 } 00:09:43.807 ], 00:09:43.807 "core_count": 1 00:09:43.807 } 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79753 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 79753 ']' 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 79753 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79753 00:09:43.807 killing process with pid 79753 00:09:43.807 Received shutdown signal, test time was about 10.000000 seconds 00:09:43.807 00:09:43.807 Latency(us) 00:09:43.807 [2024-11-20T17:30:25.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.807 [2024-11-20T17:30:25.595Z] =================================================================================================================== 00:09:43.807 [2024-11-20T17:30:25.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79753' 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 79753 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 79753 00:09:43.807 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:44.065 17:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:44.323 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:44.323 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:44.580 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 79158 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 79158 00:09:44.581 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 79158 Killed "${NVMF_APP[@]}" "$@" 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=79963 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 79963 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 79963 ']' 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.581 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.838 [2024-11-20 17:30:26.427646] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:44.838 [2024-11-20 17:30:26.427745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.838 [2024-11-20 17:30:26.570702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.097 [2024-11-20 17:30:26.628774] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.097 [2024-11-20 17:30:26.628829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.097 [2024-11-20 17:30:26.628855] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.097 [2024-11-20 17:30:26.628863] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.097 [2024-11-20 17:30:26.628869] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
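The kill -9 above took the target down with the grown lvstore still dirty; the fresh target below re-creates the AIO bdev, which triggers the blobstore recovery pass (the bs_recover notices), and the counts must survive it: 99 total data clusters and 61 free, consistent with the lvol's 38 allocated clusters in the bdev dump. A sketch of those checks, with the harness's NOT helper reduced to a plain failure expectation:

./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # reopening the dirty lvstore runs blobstore recovery
./scripts/rpc.py bdev_wait_for_examine                                     # let the lvol re-register
./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'   # expect 61 (99 - 38)
./scripts/rpc.py bdev_aio_delete aio_bdev                                  # hot-remove closes the lvstore
! ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>                    # must now fail: Code=-19, No such device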
00:09:45.097 [2024-11-20 17:30:26.628901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.097 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.356 [2024-11-20 17:30:27.108009] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:45.356 [2024-11-20 17:30:27.108434] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:45.356 [2024-11-20 17:30:27.108601] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 168a55e7-23a4-457f-851c-2648dd25452b 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=168a55e7-23a4-457f-851c-2648dd25452b 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.614 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:45.872 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 168a55e7-23a4-457f-851c-2648dd25452b -t 2000 00:09:46.131 [ 00:09:46.131 { 00:09:46.131 "aliases": [ 00:09:46.131 "lvs/lvol" 00:09:46.131 ], 00:09:46.131 "assigned_rate_limits": { 00:09:46.131 "r_mbytes_per_sec": 0, 00:09:46.131 "rw_ios_per_sec": 0, 00:09:46.131 "rw_mbytes_per_sec": 0, 00:09:46.131 "w_mbytes_per_sec": 0 00:09:46.131 }, 00:09:46.131 "block_size": 4096, 00:09:46.131 "claimed": false, 00:09:46.131 "driver_specific": { 00:09:46.131 "lvol": { 00:09:46.131 "base_bdev": "aio_bdev", 00:09:46.131 "clone": false, 00:09:46.131 "esnap_clone": false, 00:09:46.131 "lvol_store_uuid": "6fa90e7a-6e6f-4839-a734-a661b60be7ad", 00:09:46.131 "num_allocated_clusters": 38, 00:09:46.131 "snapshot": false, 00:09:46.131 
"thin_provision": false 00:09:46.131 } 00:09:46.131 }, 00:09:46.131 "name": "168a55e7-23a4-457f-851c-2648dd25452b", 00:09:46.131 "num_blocks": 38912, 00:09:46.131 "product_name": "Logical Volume", 00:09:46.131 "supported_io_types": { 00:09:46.131 "abort": false, 00:09:46.131 "compare": false, 00:09:46.131 "compare_and_write": false, 00:09:46.131 "copy": false, 00:09:46.131 "flush": false, 00:09:46.131 "get_zone_info": false, 00:09:46.131 "nvme_admin": false, 00:09:46.131 "nvme_io": false, 00:09:46.131 "nvme_io_md": false, 00:09:46.131 "nvme_iov_md": false, 00:09:46.131 "read": true, 00:09:46.131 "reset": true, 00:09:46.131 "seek_data": true, 00:09:46.131 "seek_hole": true, 00:09:46.131 "unmap": true, 00:09:46.131 "write": true, 00:09:46.131 "write_zeroes": true, 00:09:46.131 "zcopy": false, 00:09:46.131 "zone_append": false, 00:09:46.131 "zone_management": false 00:09:46.131 }, 00:09:46.131 "uuid": "168a55e7-23a4-457f-851c-2648dd25452b", 00:09:46.131 "zoned": false 00:09:46.131 } 00:09:46.131 ] 00:09:46.131 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:46.131 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:46.131 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:46.390 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:46.390 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:46.390 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:46.647 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:46.647 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:46.906 [2024-11-20 17:30:28.506197] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.906 17:30:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:46.906 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:47.165 2024/11/20 17:30:28 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6fa90e7a-6e6f-4839-a734-a661b60be7ad], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:47.165 request: 00:09:47.165 { 00:09:47.165 "method": "bdev_lvol_get_lvstores", 00:09:47.165 "params": { 00:09:47.165 "uuid": "6fa90e7a-6e6f-4839-a734-a661b60be7ad" 00:09:47.165 } 00:09:47.165 } 00:09:47.165 Got JSON-RPC error response 00:09:47.165 GoRPCClient: error on JSON-RPC call 00:09:47.165 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:47.165 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.165 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.165 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.165 17:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.423 aio_bdev 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 168a55e7-23a4-457f-851c-2648dd25452b 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=168a55e7-23a4-457f-851c-2648dd25452b 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.423 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.679 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 168a55e7-23a4-457f-851c-2648dd25452b -t 2000 00:09:47.938 [ 
00:09:47.938 { 00:09:47.938 "aliases": [ 00:09:47.938 "lvs/lvol" 00:09:47.938 ], 00:09:47.938 "assigned_rate_limits": { 00:09:47.938 "r_mbytes_per_sec": 0, 00:09:47.938 "rw_ios_per_sec": 0, 00:09:47.938 "rw_mbytes_per_sec": 0, 00:09:47.938 "w_mbytes_per_sec": 0 00:09:47.938 }, 00:09:47.938 "block_size": 4096, 00:09:47.938 "claimed": false, 00:09:47.938 "driver_specific": { 00:09:47.938 "lvol": { 00:09:47.938 "base_bdev": "aio_bdev", 00:09:47.938 "clone": false, 00:09:47.938 "esnap_clone": false, 00:09:47.938 "lvol_store_uuid": "6fa90e7a-6e6f-4839-a734-a661b60be7ad", 00:09:47.938 "num_allocated_clusters": 38, 00:09:47.938 "snapshot": false, 00:09:47.938 "thin_provision": false 00:09:47.938 } 00:09:47.938 }, 00:09:47.938 "name": "168a55e7-23a4-457f-851c-2648dd25452b", 00:09:47.938 "num_blocks": 38912, 00:09:47.938 "product_name": "Logical Volume", 00:09:47.938 "supported_io_types": { 00:09:47.938 "abort": false, 00:09:47.938 "compare": false, 00:09:47.938 "compare_and_write": false, 00:09:47.938 "copy": false, 00:09:47.938 "flush": false, 00:09:47.938 "get_zone_info": false, 00:09:47.938 "nvme_admin": false, 00:09:47.938 "nvme_io": false, 00:09:47.938 "nvme_io_md": false, 00:09:47.938 "nvme_iov_md": false, 00:09:47.938 "read": true, 00:09:47.938 "reset": true, 00:09:47.938 "seek_data": true, 00:09:47.938 "seek_hole": true, 00:09:47.938 "unmap": true, 00:09:47.938 "write": true, 00:09:47.938 "write_zeroes": true, 00:09:47.938 "zcopy": false, 00:09:47.938 "zone_append": false, 00:09:47.938 "zone_management": false 00:09:47.938 }, 00:09:47.938 "uuid": "168a55e7-23a4-457f-851c-2648dd25452b", 00:09:47.938 "zoned": false 00:09:47.938 } 00:09:47.938 ] 00:09:47.938 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:47.938 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:47.938 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:48.196 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:48.196 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:48.196 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:48.455 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:48.455 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 168a55e7-23a4-457f-851c-2648dd25452b 00:09:48.713 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6fa90e7a-6e6f-4839-a734-a661b60be7ad 00:09:48.971 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:49.230 17:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:49.798 ************************************ 00:09:49.798 END TEST lvs_grow_dirty 00:09:49.798 ************************************ 00:09:49.798 00:09:49.798 real 0m20.079s 00:09:49.798 user 0m39.431s 00:09:49.798 sys 0m10.040s 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:49.798 nvmf_trace.0 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:49.798 17:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:50.365 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.366 rmmod nvme_tcp 00:09:50.366 rmmod nvme_fabrics 00:09:50.366 rmmod nvme_keyring 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 79963 ']' 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 79963 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 79963 ']' 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 79963 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:50.366 17:30:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79963 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.366 killing process with pid 79963 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79963' 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 79963 00:09:50.366 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 79963 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:50.625 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:50.884 00:09:50.884 real 0m41.124s 00:09:50.884 user 1m3.496s 00:09:50.884 sys 0m13.699s 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:50.884 ************************************ 00:09:50.884 END TEST nvmf_lvs_grow 00:09:50.884 ************************************ 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.884 ************************************ 00:09:50.884 START TEST nvmf_bdev_io_wait 00:09:50.884 ************************************ 00:09:50.884 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:51.144 * Looking for test storage... 
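[Annotation] The nvmftestfini teardown logged above unwinds the veth topology in roughly the reverse order it was built. Condensed into a plain shell sketch, assembled from the commands the log actually shows; the final namespace removal happens inside _remove_spdk_ns, whose body is not traced here, so the last line is an assumption:

    # Detach every veth peer from the bridge, then bring the peers down
    # (the log does all four nomaster calls first, then all four downs).
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster
        ip link set "$peer" down
    done
    ip link delete nvmf_br type bridge              # drop the bridge itself
    ip link delete nvmf_init_if                    # host-side initiator interfaces
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk               # assumed body of _remove_spdk_ns

The iptables rules are reverted first, via iptr (iptables-save | grep -v SPDK_NVMF | iptables-restore), which is why every rule added during setup carries an SPDK_NVMF comment tag.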
00:09:51.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:51.144 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:51.144 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:51.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.145 --rc genhtml_branch_coverage=1 00:09:51.145 --rc genhtml_function_coverage=1 00:09:51.145 --rc genhtml_legend=1 00:09:51.145 --rc geninfo_all_blocks=1 00:09:51.145 --rc geninfo_unexecuted_blocks=1 00:09:51.145 00:09:51.145 ' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:51.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.145 --rc genhtml_branch_coverage=1 00:09:51.145 --rc genhtml_function_coverage=1 00:09:51.145 --rc genhtml_legend=1 00:09:51.145 --rc geninfo_all_blocks=1 00:09:51.145 --rc geninfo_unexecuted_blocks=1 00:09:51.145 00:09:51.145 ' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:51.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.145 --rc genhtml_branch_coverage=1 00:09:51.145 --rc genhtml_function_coverage=1 00:09:51.145 --rc genhtml_legend=1 00:09:51.145 --rc geninfo_all_blocks=1 00:09:51.145 --rc geninfo_unexecuted_blocks=1 00:09:51.145 00:09:51.145 ' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.145 --rc genhtml_branch_coverage=1 00:09:51.145 --rc genhtml_function_coverage=1 00:09:51.145 --rc genhtml_legend=1 00:09:51.145 --rc geninfo_all_blocks=1 00:09:51.145 --rc geninfo_unexecuted_blocks=1 00:09:51.145 00:09:51.145 ' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.145 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
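[Annotation] The "[: : integer expression expected" complaint from common.sh line 33 is a bash arithmetic-test pitfall, not a test failure: '[' ... -eq ... ']' needs integer operands on both sides, and an unset variable expands to the empty string. A minimal reproduction with the usual guard (the variable name here is illustrative, not one from the harness):

    [ '' -eq 1 ]                  # bash: [: : integer expression expected; exit status 2
    flag=${SOME_TEST_FLAG:-0}     # hypothetical flag; default an unset/empty value to 0
    [ "$flag" -eq 1 ] && echo enabled

Because the failing test is used as a condition, it simply behaves as false and the script falls through to the next branch, which is consistent with the trace continuing at common.sh line 37.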
00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:51.145 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:51.146 
17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:51.146 Cannot find device "nvmf_init_br" 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:51.146 Cannot find device "nvmf_init_br2" 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:51.146 Cannot find device "nvmf_tgt_br" 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:51.146 Cannot find device "nvmf_tgt_br2" 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:51.146 Cannot find device "nvmf_init_br" 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:51.146 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:51.405 Cannot find device "nvmf_init_br2" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:51.405 Cannot find device "nvmf_tgt_br" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:51.405 Cannot find device "nvmf_tgt_br2" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:51.405 Cannot find device "nvmf_br" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:51.405 Cannot find device "nvmf_init_if" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:51.405 Cannot find device "nvmf_init_if2" 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.405 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:51.405 
17:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.405 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:51.405 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:51.406 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:51.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:09:51.666 00:09:51.666 --- 10.0.0.3 ping statistics --- 00:09:51.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.666 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:51.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:51.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:51.666 00:09:51.666 --- 10.0.0.4 ping statistics --- 00:09:51.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.666 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:51.666 00:09:51.666 --- 10.0.0.1 ping statistics --- 00:09:51.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.666 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:51.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:09:51.666 00:09:51.666 --- 10.0.0.2 ping statistics --- 00:09:51.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.666 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=80430 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 80430 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 80430 ']' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.666 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.666 [2024-11-20 17:30:33.374258] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
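[Annotation] Before the target app comes up, the plumbing above has built a bridged, dual-homed veth topology: the initiator ends stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24), the target ends are moved into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24), all four peer ends are enslaved to nvmf_br, iptables ACCEPT rules tagged SPDK_NVMF open port 4420, and the four pings prove forwarding in both directions. Condensed to one interface pair of the logged commands (the *_if2/*_br2 pair is set up the same way for the second addresses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg, root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # root ns -> target ns, through the bridge

nvmf_tgt itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), so every listener it opens binds to the namespaced interfaces.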
00:09:51.666 [2024-11-20 17:30:33.374349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.926 [2024-11-20 17:30:33.517494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.926 [2024-11-20 17:30:33.583903] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.926 [2024-11-20 17:30:33.583945] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.926 [2024-11-20 17:30:33.583955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.926 [2024-11-20 17:30:33.583963] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.926 [2024-11-20 17:30:33.583969] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.926 [2024-11-20 17:30:33.584357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.926 [2024-11-20 17:30:33.584457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.926 [2024-11-20 17:30:33.585163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.926 [2024-11-20 17:30:33.585170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.926 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.186 [2024-11-20 17:30:33.751961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.186 Malloc0 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.186 [2024-11-20 17:30:33.826136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=80470 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=80472 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:52.186 { 00:09:52.186 "params": { 
00:09:52.186 "name": "Nvme$subsystem", 00:09:52.186 "trtype": "$TEST_TRANSPORT", 00:09:52.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.186 "adrfam": "ipv4", 00:09:52.186 "trsvcid": "$NVMF_PORT", 00:09:52.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.186 "hdgst": ${hdgst:-false}, 00:09:52.186 "ddgst": ${ddgst:-false} 00:09:52.186 }, 00:09:52.186 "method": "bdev_nvme_attach_controller" 00:09:52.186 } 00:09:52.186 EOF 00:09:52.186 )") 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=80474 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:52.186 { 00:09:52.186 "params": { 00:09:52.186 "name": "Nvme$subsystem", 00:09:52.186 "trtype": "$TEST_TRANSPORT", 00:09:52.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.186 "adrfam": "ipv4", 00:09:52.186 "trsvcid": "$NVMF_PORT", 00:09:52.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.186 "hdgst": ${hdgst:-false}, 00:09:52.186 "ddgst": ${ddgst:-false} 00:09:52.186 }, 00:09:52.186 "method": "bdev_nvme_attach_controller" 00:09:52.186 } 00:09:52.186 EOF 00:09:52.186 )") 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=80477 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:52.186 { 00:09:52.186 "params": { 00:09:52.186 "name": "Nvme$subsystem", 00:09:52.186 "trtype": "$TEST_TRANSPORT", 00:09:52.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.186 "adrfam": "ipv4", 00:09:52.186 "trsvcid": "$NVMF_PORT", 00:09:52.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.186 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:52.186 "hdgst": ${hdgst:-false}, 00:09:52.186 "ddgst": ${ddgst:-false} 00:09:52.186 }, 00:09:52.186 "method": "bdev_nvme_attach_controller" 00:09:52.186 } 00:09:52.186 EOF 00:09:52.186 )") 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:52.186 "params": { 00:09:52.186 "name": "Nvme1", 00:09:52.186 "trtype": "tcp", 00:09:52.186 "traddr": "10.0.0.3", 00:09:52.186 "adrfam": "ipv4", 00:09:52.186 "trsvcid": "4420", 00:09:52.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.186 "hdgst": false, 00:09:52.186 "ddgst": false 00:09:52.186 }, 00:09:52.186 "method": "bdev_nvme_attach_controller" 00:09:52.186 }' 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:52.186 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:52.186 { 00:09:52.186 "params": { 00:09:52.186 "name": "Nvme$subsystem", 00:09:52.186 "trtype": "$TEST_TRANSPORT", 00:09:52.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.186 "adrfam": "ipv4", 00:09:52.186 "trsvcid": "$NVMF_PORT", 00:09:52.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.186 "hdgst": ${hdgst:-false}, 00:09:52.186 "ddgst": ${ddgst:-false} 00:09:52.186 }, 00:09:52.186 "method": "bdev_nvme_attach_controller" 00:09:52.186 } 00:09:52.187 EOF 00:09:52.187 )") 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:52.187 "params": { 00:09:52.187 "name": "Nvme1", 00:09:52.187 "trtype": "tcp", 00:09:52.187 "traddr": "10.0.0.3", 00:09:52.187 "adrfam": "ipv4", 00:09:52.187 "trsvcid": "4420", 00:09:52.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.187 "hdgst": false, 00:09:52.187 "ddgst": false 00:09:52.187 }, 00:09:52.187 "method": "bdev_nvme_attach_controller" 00:09:52.187 }' 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:52.187 "params": { 00:09:52.187 "name": "Nvme1", 00:09:52.187 "trtype": "tcp", 00:09:52.187 "traddr": "10.0.0.3", 00:09:52.187 "adrfam": "ipv4", 00:09:52.187 "trsvcid": "4420", 00:09:52.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.187 "hdgst": false, 00:09:52.187 "ddgst": false 00:09:52.187 }, 00:09:52.187 "method": "bdev_nvme_attach_controller" 00:09:52.187 }' 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:52.187 "params": { 00:09:52.187 "name": "Nvme1", 00:09:52.187 "trtype": "tcp", 00:09:52.187 "traddr": "10.0.0.3", 00:09:52.187 "adrfam": "ipv4", 00:09:52.187 "trsvcid": "4420", 00:09:52.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.187 "hdgst": false, 00:09:52.187 "ddgst": false 00:09:52.187 }, 00:09:52.187 "method": "bdev_nvme_attach_controller" 00:09:52.187 }' 00:09:52.187 [2024-11-20 17:30:33.903179] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:52.187 [2024-11-20 17:30:33.903267] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:52.187 [2024-11-20 17:30:33.908623] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:52.187 [2024-11-20 17:30:33.908691] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:52.187 [2024-11-20 17:30:33.920512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:52.187 [2024-11-20 17:30:33.920956] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:52.187 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 80470 00:09:52.187 [2024-11-20 17:30:33.930003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
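Aside: the four workers differ only in workload, core mask, and instance id (the -i value also derives the spdk1..spdk4 file prefixes visible in the EAL parameter dumps), so the four launches reduce to a loop. Illustrative only, reusing gen_json from the sketch above; the harness runs them as the four explicit commands traced here:

    i=0
    for w in write read flush unmap; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m "$(printf '0x%x' $((0x10 << i)))" -i $((i + 1)) \
            --json <(gen_json) -q 128 -o 4096 -w "$w" -t 1 -s 256 &
        i=$((i + 1))
    done
    wait    # the harness waits on each stored PID (WRITE_PID, READ_PID, ...)
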
00:09:52.187 [2024-11-20 17:30:33.930122] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:52.446 [2024-11-20 17:30:34.139268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.446 [2024-11-20 17:30:34.210230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.446 [2024-11-20 17:30:34.219832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:52.706 [2024-11-20 17:30:34.292907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:52.706 [2024-11-20 17:30:34.310861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.706 [2024-11-20 17:30:34.389604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.706 [2024-11-20 17:30:34.390013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:52.706 [2024-11-20 17:30:34.464297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:52.706 Running I/O for 1 seconds... 00:09:52.965 Running I/O for 1 seconds... 00:09:52.965 Running I/O for 1 seconds... 00:09:52.965 Running I/O for 1 seconds... 00:09:53.901 4736.00 IOPS, 18.50 MiB/s 00:09:53.901 Latency(us) 00:09:53.901 [2024-11-20T17:30:35.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.901 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:53.901 Nvme1n1 : 1.03 4744.63 18.53 0.00 0.00 26471.91 6047.19 44326.17 00:09:53.901 [2024-11-20T17:30:35.689Z] =================================================================================================================== 00:09:53.901 [2024-11-20T17:30:35.689Z] Total : 4744.63 18.53 0.00 0.00 26471.91 6047.19 44326.17 00:09:53.901 6279.00 IOPS, 24.53 MiB/s 00:09:53.901 Latency(us) 00:09:53.901 [2024-11-20T17:30:35.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.901 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:53.901 Nvme1n1 : 1.01 6324.38 24.70 0.00 0.00 20099.83 9889.98 30384.87 00:09:53.901 [2024-11-20T17:30:35.689Z] =================================================================================================================== 00:09:53.901 [2024-11-20T17:30:35.689Z] Total : 6324.38 24.70 0.00 0.00 20099.83 9889.98 30384.87 00:09:53.901 217536.00 IOPS, 849.75 MiB/s 00:09:53.901 Latency(us) 00:09:53.901 [2024-11-20T17:30:35.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.902 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:53.902 Nvme1n1 : 1.00 217166.13 848.31 0.00 0.00 586.10 277.41 1675.64 00:09:53.902 [2024-11-20T17:30:35.690Z] =================================================================================================================== 00:09:53.902 [2024-11-20T17:30:35.690Z] Total : 217166.13 848.31 0.00 0.00 586.10 277.41 1675.64 00:09:54.161 5561.00 IOPS, 21.72 MiB/s 00:09:54.161 Latency(us) 00:09:54.161 [2024-11-20T17:30:35.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.161 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:54.161 Nvme1n1 : 1.01 5668.38 22.14 0.00 0.00 22504.40 4944.99 59578.18 00:09:54.161 [2024-11-20T17:30:35.949Z] 
=================================================================================================================== 00:09:54.161 [2024-11-20T17:30:35.949Z] Total : 5668.38 22.14 0.00 0.00 22504.40 4944.99 59578.18 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 80472 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 80474 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 80477 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.161 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.161 rmmod nvme_tcp 00:09:54.420 rmmod nvme_fabrics 00:09:54.420 rmmod nvme_keyring 00:09:54.420 17:30:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 80430 ']' 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 80430 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 80430 ']' 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 80430 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80430 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.420 killing process with pid 80430 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 80430' 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 80430 00:09:54.420 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 80430 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.678 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.938 17:30:36 
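Aside: the four result tables further up are internally consistent with Little's law, queue depth ~= IOPS x average latency, which for the -q 128 each worker was given works out to:

    write:   4744.63 IOPS x 26471.91 us ~= 125.6 in flight
    read :   6324.38 IOPS x 20099.83 us ~= 127.1
    flush: 217166.13 IOPS x   586.10 us ~= 127.3
    unmap:   5668.38 IOPS x 22504.40 us ~= 127.6

The flush figures dwarf the data workloads because the namespace is backed by a malloc bdev, which acknowledges flushes without moving any data.
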
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:54.938 00:09:54.938 real 0m3.872s 00:09:54.938 user 0m15.752s 00:09:54.938 sys 0m2.137s 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.938 ************************************ 00:09:54.938 END TEST nvmf_bdev_io_wait 00:09:54.938 ************************************ 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.938 ************************************ 00:09:54.938 START TEST nvmf_queue_depth 00:09:54.938 ************************************ 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:54.938 * Looking for test storage... 00:09:54.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.938 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:55.198 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.199 --rc genhtml_branch_coverage=1 00:09:55.199 --rc genhtml_function_coverage=1 00:09:55.199 --rc genhtml_legend=1 00:09:55.199 --rc geninfo_all_blocks=1 00:09:55.199 --rc geninfo_unexecuted_blocks=1 00:09:55.199 00:09:55.199 ' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.199 --rc genhtml_branch_coverage=1 00:09:55.199 --rc genhtml_function_coverage=1 00:09:55.199 --rc genhtml_legend=1 00:09:55.199 --rc geninfo_all_blocks=1 00:09:55.199 --rc geninfo_unexecuted_blocks=1 00:09:55.199 00:09:55.199 ' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:55.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.199 --rc genhtml_branch_coverage=1 00:09:55.199 --rc genhtml_function_coverage=1 00:09:55.199 --rc genhtml_legend=1 00:09:55.199 --rc geninfo_all_blocks=1 00:09:55.199 --rc geninfo_unexecuted_blocks=1 00:09:55.199 00:09:55.199 ' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.199 --rc genhtml_branch_coverage=1 00:09:55.199 --rc genhtml_function_coverage=1 00:09:55.199 --rc genhtml_legend=1 00:09:55.199 --rc geninfo_all_blocks=1 00:09:55.199 --rc geninfo_unexecuted_blocks=1 00:09:55.199 00:09:55.199 ' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.199 17:30:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:55.199 
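Aside: the "line 33: [: : integer expression expected" complaint above is a real (harmless) bug in the sourced common.sh: the trace shows '[' '' -eq 1 ']', i.e. a numeric test against an empty expansion. A minimal sketch of the usual guard, with a placeholder variable name since the real one is not visible in this excerpt:

    # Default the value before a numeric comparison so [ never sees ''.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # SOME_FLAG is hypothetical
        echo "flag enabled"
    fi
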
17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:55.199 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.200 17:30:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:55.200 Cannot find device "nvmf_init_br" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:55.200 Cannot find device "nvmf_init_br2" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:55.200 Cannot find device "nvmf_tgt_br" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.200 Cannot find device "nvmf_tgt_br2" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:55.200 Cannot find device "nvmf_init_br" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:55.200 Cannot find device "nvmf_init_br2" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:55.200 Cannot find device "nvmf_tgt_br" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:55.200 Cannot find device "nvmf_tgt_br2" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:55.200 Cannot find device "nvmf_br" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:55.200 Cannot find device "nvmf_init_if" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:55.200 Cannot find device "nvmf_init_if2" 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.200 17:30:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.200 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.458 17:30:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.458 
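Aside: the ip commands above rebuild the standard test fixture: four veth pairs, with the bridge-side peer of each enslaved to nvmf_br and the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. As assigned in this run:

    interface      namespace          address       bridge peer
    nvmf_init_if   host               10.0.0.1/24   nvmf_init_br
    nvmf_init_if2  host               10.0.0.2/24   nvmf_init_br2
    nvmf_tgt_if    nvmf_tgt_ns_spdk   10.0.0.3/24   nvmf_tgt_br
    nvmf_tgt_if2   nvmf_tgt_ns_spdk   10.0.0.4/24   nvmf_tgt_br2

All four bridge peers are set master nvmf_br, iptables ACCEPT rules are added for port 4420, and the pings that follow verify reachability in both directions.
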
17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.458 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:55.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:09:55.459 00:09:55.459 --- 10.0.0.3 ping statistics --- 00:09:55.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.459 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:55.459 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:55.459 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:09:55.459 00:09:55.459 --- 10.0.0.4 ping statistics --- 00:09:55.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.459 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:09:55.459 00:09:55.459 --- 10.0.0.1 ping statistics --- 00:09:55.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.459 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:55.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:55.459 00:09:55.459 --- 10.0.0.2 ping statistics --- 00:09:55.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.459 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=80737 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 80737 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 80737 ']' 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.459 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.718 [2024-11-20 17:30:37.285330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:55.718 [2024-11-20 17:30:37.285422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.718 [2024-11-20 17:30:37.434734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.976 [2024-11-20 17:30:37.523166] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.976 [2024-11-20 17:30:37.523240] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.976 [2024-11-20 17:30:37.523257] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.976 [2024-11-20 17:30:37.523268] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.976 [2024-11-20 17:30:37.523277] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.976 [2024-11-20 17:30:37.523314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.543 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.543 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:56.543 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:56.543 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.543 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-11-20 17:30:38.373596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 Malloc0 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
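Aside: note that the target application runs inside the namespace built above, so the 4420 listener configured next binds the namespaced 10.0.0.3 while bdevperf connects from the host side of the bridge. The launch, copied from this run's trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
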
00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-11-20 17:30:38.445997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:56.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=80787 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 80787 /var/tmp/bdevperf.sock 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 80787 ']' 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.802 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-11-20 17:30:38.514728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
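Aside: condensed, the target bring-up traced above is five RPCs (rpc_cmd is the harness wrapper that forwards these to the target's RPC socket; flags copied verbatim from this run, issued here via scripts/rpc.py for a standalone sketch):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                            # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

bdevperf itself is started with -z (pause until told) on its own RPC socket /var/tmp/bdevperf.sock; the controller attach and the perform_tests call that drives the qd=1024 verify workload follow below.
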
00:09:56.802 [2024-11-20 17:30:38.515001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80787 ] 00:09:57.062 [2024-11-20 17:30:38.658873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.062 [2024-11-20 17:30:38.731703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.321 NVMe0n1 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.321 17:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.321 Running I/O for 10 seconds... 00:09:59.635 9986.00 IOPS, 39.01 MiB/s [2024-11-20T17:30:42.359Z] 10229.50 IOPS, 39.96 MiB/s [2024-11-20T17:30:43.295Z] 10370.33 IOPS, 40.51 MiB/s [2024-11-20T17:30:44.231Z] 10498.25 IOPS, 41.01 MiB/s [2024-11-20T17:30:45.169Z] 10531.60 IOPS, 41.14 MiB/s [2024-11-20T17:30:46.105Z] 10618.17 IOPS, 41.48 MiB/s [2024-11-20T17:30:47.050Z] 10716.29 IOPS, 41.86 MiB/s [2024-11-20T17:30:48.028Z] 10809.12 IOPS, 42.22 MiB/s [2024-11-20T17:30:49.404Z] 10865.89 IOPS, 42.44 MiB/s [2024-11-20T17:30:49.404Z] 10943.00 IOPS, 42.75 MiB/s 00:10:07.616 Latency(us) 00:10:07.616 [2024-11-20T17:30:49.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.616 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:07.616 Verification LBA range: start 0x0 length 0x4000 00:10:07.616 NVMe0n1 : 10.07 10969.55 42.85 0.00 0.00 93011.31 23473.80 67680.81 00:10:07.616 [2024-11-20T17:30:49.404Z] =================================================================================================================== 00:10:07.616 [2024-11-20T17:30:49.404Z] Total : 10969.55 42.85 0.00 0.00 93011.31 23473.80 67680.81 00:10:07.616 { 00:10:07.616 "results": [ 00:10:07.616 { 00:10:07.616 "job": "NVMe0n1", 00:10:07.616 "core_mask": "0x1", 00:10:07.616 "workload": "verify", 00:10:07.616 "status": "finished", 00:10:07.616 "verify_range": { 00:10:07.616 "start": 0, 00:10:07.616 "length": 16384 00:10:07.616 }, 00:10:07.616 "queue_depth": 1024, 00:10:07.616 "io_size": 4096, 00:10:07.616 "runtime": 10.069144, 00:10:07.616 "iops": 10969.552128760895, 00:10:07.616 "mibps": 42.84981300297225, 00:10:07.616 "io_failed": 0, 00:10:07.616 "io_timeout": 0, 00:10:07.616 "avg_latency_us": 93011.3059213132, 00:10:07.616 "min_latency_us": 23473.803636363635, 00:10:07.616 "max_latency_us": 67680.81454545454 00:10:07.616 } 00:10:07.616 ], 00:10:07.616 "core_count": 1 00:10:07.616 } 00:10:07.616 17:30:49 
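Aside: the summary block above is the JSON that bdevperf.py perform_tests returns. Two quick checks on it: Little's law again gives 10969.55 IOPS x 93011.31 us ~= 1020 outstanding I/Os, i.e. the -q 1024 the test requested; and the headline numbers can be pulled out with jq (the file name is hypothetical):

    # Assuming the summary above was saved to results.json:
    jq -r '.results[]
           | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us / 1000 | round) ms"' \
        results.json
    # -> NVMe0n1: 10969 IOPS, avg 93 ms
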
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 80787 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 80787 ']' 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 80787 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80787 00:10:07.616 killing process with pid 80787 00:10:07.616 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.616 00:10:07.616 Latency(us) 00:10:07.616 [2024-11-20T17:30:49.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.616 [2024-11-20T17:30:49.404Z] =================================================================================================================== 00:10:07.616 [2024-11-20T17:30:49.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.616 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80787' 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 80787 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 80787 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.617 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.617 rmmod nvme_tcp 00:10:07.875 rmmod nvme_fabrics 00:10:07.875 rmmod nvme_keyring 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 80737 ']' 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 80737 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 80737 ']' 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- 
# kill -0 80737 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80737 00:10:07.875 killing process with pid 80737 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80737' 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 80737 00:10:07.875 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 80737 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:08.133 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.392 17:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:08.392 00:10:08.392 real 0m13.468s 00:10:08.392 user 0m21.941s 00:10:08.392 sys 0m2.430s 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.392 ************************************ 00:10:08.392 END TEST nvmf_queue_depth 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.392 ************************************ 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.392 ************************************ 00:10:08.392 START TEST nvmf_target_multipath 00:10:08.392 ************************************ 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.392 * Looking for test storage... 
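The multipath test starting here connects one initiator to the same subsystem over two TCP paths and then flips the ANA (Asymmetric Namespace Access) state of each listener, waiting for the kernel path devices nvme0c0n1/nvme0c1n1 to report the expected state. The helper doing that waiting, check_ana_state, is traced repeatedly from multipath.sh@18 onward further below; a sketch reconstructed from those trace lines (not a verbatim copy of the script) looks like:

    # Poll /sys/block/<path>/ana_state until it reports the wanted state,
    # giving up after ~20 one-second retries, per the @22-@26 trace lines.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            if ((timeout-- == 0)); then
                return 1
            fi
        done
    }

Per the trace, it is invoked as check_ana_state nvme0c0n1 optimized (and later with inaccessible and non-optimized) after each nvmf_subsystem_listener_set_ana_state call.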
00:10:08.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:08.392 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.652 --rc genhtml_branch_coverage=1 00:10:08.652 --rc genhtml_function_coverage=1 00:10:08.652 --rc genhtml_legend=1 00:10:08.652 --rc geninfo_all_blocks=1 00:10:08.652 --rc geninfo_unexecuted_blocks=1 00:10:08.652 00:10:08.652 ' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.652 --rc genhtml_branch_coverage=1 00:10:08.652 --rc genhtml_function_coverage=1 00:10:08.652 --rc genhtml_legend=1 00:10:08.652 --rc geninfo_all_blocks=1 00:10:08.652 --rc geninfo_unexecuted_blocks=1 00:10:08.652 00:10:08.652 ' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.652 --rc genhtml_branch_coverage=1 00:10:08.652 --rc genhtml_function_coverage=1 00:10:08.652 --rc genhtml_legend=1 00:10:08.652 --rc geninfo_all_blocks=1 00:10:08.652 --rc geninfo_unexecuted_blocks=1 00:10:08.652 00:10:08.652 ' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.652 --rc genhtml_branch_coverage=1 00:10:08.652 --rc genhtml_function_coverage=1 00:10:08.652 --rc genhtml_legend=1 00:10:08.652 --rc geninfo_all_blocks=1 00:10:08.652 --rc geninfo_unexecuted_blocks=1 00:10:08.652 00:10:08.652 ' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.652 
17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.652 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.653 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:08.653 17:30:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:08.653 Cannot find device "nvmf_init_br" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:08.653 Cannot find device "nvmf_init_br2" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:08.653 Cannot find device "nvmf_tgt_br" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.653 Cannot find device "nvmf_tgt_br2" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:08.653 Cannot find device "nvmf_init_br" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:08.653 Cannot find device "nvmf_init_br2" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:08.653 Cannot find device "nvmf_tgt_br" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:08.653 Cannot find device "nvmf_tgt_br2" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:08.653 Cannot find device "nvmf_br" 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:08.653 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:08.912 Cannot find device "nvmf_init_if" 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:08.912 Cannot find device "nvmf_init_if2" 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.912 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
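At this point the trace is rebuilding the virtual test network (nvmf/common.sh@177 onward): a dedicated network namespace plus four veth pairs, with the initiator ends kept on the host and the target ends moved into the namespace. A condensed sketch using the names and addresses from the trace (the remaining link-ups, the nvmf_br bridge enslaving, and the iptables ACCEPT rules follow immediately below):

    # namespace for the target side
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if is the endpoint, *_br its peer, later enslaved to a bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses on the host, target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2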
00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:08.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:10:08.913 00:10:08.913 --- 10.0.0.3 ping statistics --- 00:10:08.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.913 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:08.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:08.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:10:08.913 00:10:08.913 --- 10.0.0.4 ping statistics --- 00:10:08.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.913 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:08.913 00:10:08.913 --- 10.0.0.1 ping statistics --- 00:10:08.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.913 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:08.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:08.913 00:10:08.913 --- 10.0.0.2 ping statistics --- 00:10:08.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.913 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:08.913 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:09.172 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
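That wait is for the target application nvmfappstart launches next; per the trace, it runs inside the namespace built above, so its port-4420 listeners bind behind the veth pairs rather than on the host:

    # shm id 0 (-i), all tracepoint groups (-e 0xFFFF), core mask 0xF, i.e.
    # four reactors; waitforlisten then blocks on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF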
00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=81166 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 81166 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 81166 ']' 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.173 17:30:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.173 [2024-11-20 17:30:50.781241] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:09.173 [2024-11-20 17:30:50.782021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.173 [2024-11-20 17:30:50.925837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.432 [2024-11-20 17:30:51.008168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.432 [2024-11-20 17:30:51.008225] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.432 [2024-11-20 17:30:51.008240] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.432 [2024-11-20 17:30:51.008251] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.432 [2024-11-20 17:30:51.008260] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
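The reactor startup notices follow, after which multipath.sh configures a subsystem reachable over two distinct addresses and connects the host initiator to both; this is what produces the two path devices (nvme0c0n1, nvme0c1n1) checked later. Condensed from the rpc.py and nvme connect calls in the trace (-g/-G request header and data digests on the NVMe/TCP connections):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners -> two paths to the same namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420
    # connect the initiator once per path
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
        --hostid=331739d4-7eee-415c-8218-055dbccf3f5e \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
        --hostid=331739d4-7eee-415c-8218-055dbccf3f5e \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G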
00:10:09.432 [2024-11-20 17:30:51.008358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.432 [2024-11-20 17:30:51.008484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.432 [2024-11-20 17:30:51.009749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.432 [2024-11-20 17:30:51.009754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.368 17:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:10.368 [2024-11-20 17:30:52.090909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.368 17:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:10.627 Malloc0 00:10:10.627 17:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:10.886 17:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.144 17:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.402 [2024-11-20 17:30:53.031956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.402 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:11.660 [2024-11-20 17:30:53.252200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:11.660 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
waitforserial SPDKISFASTANDAWESOME 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.917 17:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:14.449 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=81309 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:14.450 17:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:14.450 [global] 00:10:14.450 thread=1 00:10:14.450 invalidate=1 00:10:14.450 rw=randrw 00:10:14.450 time_based=1 00:10:14.450 runtime=6 00:10:14.450 ioengine=libaio 00:10:14.450 direct=1 00:10:14.450 bs=4096 00:10:14.450 iodepth=128 00:10:14.450 norandommap=0 00:10:14.450 numjobs=1 00:10:14.450 00:10:14.450 verify_dump=1 00:10:14.450 verify_backlog=512 00:10:14.450 verify_state_save=0 00:10:14.450 do_verify=1 00:10:14.450 verify=crc32c-intel 00:10:14.450 [job0] 00:10:14.450 filename=/dev/nvme0n1 00:10:14.450 Could not set queue depth (nvme0n1) 00:10:14.450 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.450 fio-3.35 00:10:14.450 Starting 1 thread 00:10:15.017 17:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:15.276 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:15.535 17:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:16.911 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:16.911 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:16.911 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:16.911 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:16.911 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:17.170 17:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:18.105 17:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:18.105 17:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:18.105 17:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:18.105 17:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 81309 00:10:20.638 00:10:20.638 job0: (groupid=0, jobs=1): err= 0: pid=81330: Wed Nov 20 17:31:02 2024 00:10:20.638 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(273MiB/6004msec) 00:10:20.639 slat (usec): min=2, max=6094, avg=48.24, stdev=214.44 00:10:20.639 clat (usec): min=1039, max=15669, avg=7416.04, stdev=1156.02 00:10:20.639 lat (usec): min=1126, max=15679, avg=7464.28, stdev=1164.34 00:10:20.639 clat percentiles (usec): 00:10:20.639 | 1.00th=[ 4424], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6652], 00:10:20.639 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:20.639 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9503], 00:10:20.639 | 99.00th=[10945], 99.50th=[11469], 99.90th=[12649], 99.95th=[13435], 00:10:20.639 | 99.99th=[14353] 00:10:20.639 bw ( KiB/s): min= 9496, max=33392, per=53.52%, avg=24927.09, stdev=6700.94, samples=11 00:10:20.639 iops : min= 2374, max= 8348, avg=6231.73, stdev=1675.20, samples=11 00:10:20.639 write: IOPS=7054, BW=27.6MiB/s (28.9MB/s)(148MiB/5371msec); 0 zone resets 00:10:20.639 slat (usec): min=4, max=4905, avg=59.41, stdev=151.86 00:10:20.639 clat (usec): min=873, max=14249, avg=6413.31, stdev=995.29 00:10:20.639 lat (usec): min=1005, max=14273, avg=6472.71, stdev=1000.16 00:10:20.639 clat percentiles (usec): 00:10:20.639 | 1.00th=[ 3556], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 5800], 00:10:20.639 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6587], 00:10:20.639 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7373], 95.00th=[ 7767], 00:10:20.639 | 99.00th=[ 9372], 99.50th=[10683], 99.90th=[12649], 99.95th=[12780], 00:10:20.639 | 99.99th=[14222] 00:10:20.639 bw ( KiB/s): min= 9928, max=32648, per=88.36%, avg=24932.27, stdev=6303.85, samples=11 00:10:20.639 iops : min= 2482, max= 8162, avg=6233.00, stdev=1575.92, samples=11 00:10:20.639 lat (usec) : 1000=0.01% 
00:10:20.639 lat (msec) : 2=0.01%, 4=1.03%, 10=96.84%, 20=2.11% 00:10:20.639 cpu : usr=6.16%, sys=23.90%, ctx=6864, majf=0, minf=102 00:10:20.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:10:20.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.639 issued rwts: total=69908,37888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.639 00:10:20.639 Run status group 0 (all jobs): 00:10:20.639 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=273MiB (286MB), run=6004-6004msec 00:10:20.639 WRITE: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=148MiB (155MB), run=5371-5371msec 00:10:20.639 00:10:20.639 Disk stats (read/write): 00:10:20.639 nvme0n1: ios=68769/37267, merge=0/0, ticks=477439/222251, in_queue=699690, util=98.62% 00:10:20.639 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:20.639 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:21.207 17:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=81464 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:22.142 17:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:22.142 [global] 00:10:22.142 thread=1 00:10:22.142 invalidate=1 00:10:22.142 rw=randrw 00:10:22.142 time_based=1 00:10:22.142 runtime=6 00:10:22.142 ioengine=libaio 00:10:22.142 direct=1 00:10:22.142 bs=4096 00:10:22.142 iodepth=128 00:10:22.142 norandommap=0 00:10:22.142 numjobs=1 00:10:22.142 00:10:22.142 verify_dump=1 00:10:22.142 verify_backlog=512 00:10:22.142 verify_state_save=0 00:10:22.142 do_verify=1 00:10:22.142 verify=crc32c-intel 00:10:22.142 [job0] 00:10:22.142 filename=/dev/nvme0n1 00:10:22.142 Could not set queue depth (nvme0n1) 00:10:22.142 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.142 fio-3.35 00:10:22.142 Starting 1 thread 00:10:23.077 17:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:23.336 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:23.594 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.595 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:23.595 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:23.595 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:23.595 17:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:24.972 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:24.972 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:24.972 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:24.972 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:24.972 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:25.231 17:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:26.608 17:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:26.608 17:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.608 17:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:26.608 17:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 81464 00:10:28.515 00:10:28.515 job0: (groupid=0, jobs=1): err= 0: pid=81485: Wed Nov 20 17:31:10 2024 00:10:28.515 read: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(277MiB/6006msec) 00:10:28.515 slat (usec): min=4, max=7090, avg=41.52, stdev=196.93 00:10:28.515 clat (usec): min=242, max=21046, avg=7363.69, stdev=2234.50 00:10:28.515 lat (usec): min=265, max=21059, avg=7405.21, stdev=2240.30 00:10:28.515 clat percentiles (usec): 00:10:28.515 | 1.00th=[ 1811], 5.00th=[ 3195], 10.00th=[ 4752], 20.00th=[ 6259], 00:10:28.515 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7635], 00:10:28.515 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 9765], 95.00th=[11338], 00:10:28.515 | 99.00th=[14353], 99.50th=[15270], 99.90th=[17433], 99.95th=[17957], 00:10:28.515 | 99.99th=[19530] 00:10:28.515 bw ( KiB/s): min=13808, max=32680, per=53.85%, avg=25478.55, stdev=5541.42, samples=11 00:10:28.515 iops : min= 3452, max= 8170, avg=6369.64, stdev=1385.35, samples=11 00:10:28.515 write: IOPS=7067, BW=27.6MiB/s (28.9MB/s)(149MiB/5396msec); 0 zone resets 00:10:28.515 slat (usec): min=12, max=2286, avg=52.66, stdev=131.48 00:10:28.515 clat (usec): min=375, max=17468, avg=6277.91, stdev=2096.92 00:10:28.515 lat (usec): min=402, max=17493, avg=6330.57, stdev=2100.76 00:10:28.515 clat percentiles (usec): 00:10:28.515 | 1.00th=[ 1303], 5.00th=[ 2089], 10.00th=[ 3294], 20.00th=[ 5211], 00:10:28.515 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6718], 00:10:28.515 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 8291], 95.00th=[10159], 00:10:28.515 | 99.00th=[12125], 99.50th=[13173], 99.90th=[14615], 99.95th=[15270], 00:10:28.515 | 99.99th=[16188] 00:10:28.515 bw ( KiB/s): min=14384, max=32432, per=90.05%, avg=25456.73, stdev=5192.68, samples=11 00:10:28.515 iops : min= 3596, max= 8108, avg=6364.18, stdev=1298.17, samples=11 00:10:28.515 lat (usec) : 250=0.01%, 500=0.04%, 750=0.10%, 1000=0.21% 00:10:28.516 lat (msec) : 2=2.11%, 4=7.28%, 10=82.45%, 20=7.82%, 50=0.01% 00:10:28.516 cpu : usr=6.36%, sys=24.80%, ctx=7615, majf=0, minf=108 00:10:28.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:28.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.516 issued rwts: total=71034,38134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.516 00:10:28.516 Run status group 0 (all jobs): 00:10:28.516 READ: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=277MiB (291MB), run=6006-6006msec 00:10:28.516 WRITE: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=149MiB (156MB), run=5396-5396msec 00:10:28.516 00:10:28.516 Disk stats (read/write): 00:10:28.516 nvme0n1: ios=70001/37622, merge=0/0, ticks=480638/220195, in_queue=700833, util=98.63% 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:28.516 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.084 rmmod nvme_tcp 00:10:29.084 rmmod nvme_fabrics 00:10:29.084 rmmod nvme_keyring 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 81166 ']' 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 81166 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 81166 ']' 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 81166 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81166 00:10:29.084 killing process with pid 81166 00:10:29.084 
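Note: the trace above walks waitforserial_disconnect from autotest_common.sh, which polls lsblk until the SPDKISFASTANDAWESOME serial no longer appears in any block-device listing, confirming that `nvme disconnect` really tore down both controllers. A minimal sketch of that polling pattern, reconstructed from the traced commands (the retry cap and sleep interval below are assumptions; the log does not show them):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Keep polling while any block device still reports the serial,
        # checking both listing forms seen in the trace.
        while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # assumed bound; the real timeout is not in the log
            sleep 1
        done
        return 0
    }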
17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.084 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81166' 00:10:29.085 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 81166 00:10:29.085 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 81166 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:29.343 17:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:29.343 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:29.602 17:31:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.602 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:29.603 ************************************ 00:10:29.603 END TEST nvmf_target_multipath 00:10:29.603 ************************************ 00:10:29.603 00:10:29.603 real 0m21.135s 00:10:29.603 user 1m22.030s 00:10:29.603 sys 0m6.550s 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.603 ************************************ 00:10:29.603 START TEST nvmf_zcopy 00:10:29.603 ************************************ 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:29.603 * Looking for test storage... 
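Note: the teardown just traced is the killprocess helper: it rejects an empty pid, probes liveness with `kill -0`, reads the process name via `ps --no-headers -o comm=` so it can special-case a sudo wrapper (here the name is reactor_0, so the plain branch runs), then kills and reaps with `wait`. A condensed sketch of that sequence as traced; the sudo-branch body is an assumption, since this run never enters it:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1      # mirrors the '-z' guard in the trace
        kill -0 "$pid" || return 0     # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            pid=$(pgrep -P "$pid")     # assumed: target the child, not the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"     # wait works because nvmf_tgt is a child of this shell
    }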
00:10:29.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:29.603 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.862 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.863 --rc genhtml_branch_coverage=1 00:10:29.863 --rc genhtml_function_coverage=1 00:10:29.863 --rc genhtml_legend=1 00:10:29.863 --rc geninfo_all_blocks=1 00:10:29.863 --rc geninfo_unexecuted_blocks=1 00:10:29.863 00:10:29.863 ' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.863 --rc genhtml_branch_coverage=1 00:10:29.863 --rc genhtml_function_coverage=1 00:10:29.863 --rc genhtml_legend=1 00:10:29.863 --rc geninfo_all_blocks=1 00:10:29.863 --rc geninfo_unexecuted_blocks=1 00:10:29.863 00:10:29.863 ' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.863 --rc genhtml_branch_coverage=1 00:10:29.863 --rc genhtml_function_coverage=1 00:10:29.863 --rc genhtml_legend=1 00:10:29.863 --rc geninfo_all_blocks=1 00:10:29.863 --rc geninfo_unexecuted_blocks=1 00:10:29.863 00:10:29.863 ' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.863 --rc genhtml_branch_coverage=1 00:10:29.863 --rc genhtml_function_coverage=1 00:10:29.863 --rc genhtml_legend=1 00:10:29.863 --rc geninfo_all_blocks=1 00:10:29.863 --rc geninfo_unexecuted_blocks=1 00:10:29.863 00:10:29.863 ' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
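Note: the block above is scripts/common.sh deciding that lcov 1.15 predates version 2 (`lt 1.15 2`) before exporting the branch-coverage flags. The traced algorithm splits both versions on `.`, `-`, and `:`, treats missing fields as zero, and compares field by field. A standalone sketch following those traced steps (condensed: the `decimal` sanity-check helper seen in the trace is omitted):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        local lt=0 gt=0 v
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            ((d1 > d2)) && { gt=1; break; }
            ((d1 < d2)) && { lt=1; break; }
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
        esac
    }

    lt 1.15 2 && echo "lcov is older than 2"   # first differing field (1 < 2) decides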
00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.863 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
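Note: one real defect surfaces in the trace above: build_nvmf_app_args evaluates `'[' '' -eq 1 ']'`, and test(1) aborts with "integer expression expected" at nvmf/common.sh line 33 because the tested variable expands to an empty string; the run continues only because that branch is non-fatal. A defensive form of the comparison; the variable name below is a placeholder, since the log does not reveal which expansion is empty:

    # As traced: [ '' -eq 1 ]  ->  "[: : integer expression expected"
    # Fix: give the expansion a numeric default so test always sees an integer.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # SOME_TEST_FLAG is an assumed name
        :   # branch body elided; the log does not show what line 33 guards
    fi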
00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.863 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:29.864 Cannot find device "nvmf_init_br" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:29.864 17:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:29.864 Cannot find device "nvmf_init_br2" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:29.864 Cannot find device "nvmf_tgt_br" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.864 Cannot find device "nvmf_tgt_br2" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:29.864 Cannot find device "nvmf_init_br" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:29.864 Cannot find device "nvmf_init_br2" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:29.864 Cannot find device "nvmf_tgt_br" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:29.864 Cannot find device "nvmf_tgt_br2" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:29.864 Cannot find device "nvmf_br" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:29.864 Cannot find device "nvmf_init_if" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:29.864 Cannot find device "nvmf_init_if2" 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:29.864 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:30.123 17:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:30.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:30.123 00:10:30.123 --- 10.0.0.3 ping statistics --- 00:10:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.123 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:30.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:30.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:10:30.123 00:10:30.123 --- 10.0.0.4 ping statistics --- 00:10:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.123 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:30.123 00:10:30.123 --- 10.0.0.1 ping statistics --- 00:10:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.123 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:30.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:30.123 00:10:30.123 --- 10.0.0.2 ping statistics --- 00:10:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.123 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.123 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:30.124 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:30.124 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.124 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:30.124 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=81825 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 81825 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 81825 ']' 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.382 17:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.382 [2024-11-20 17:31:12.002588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
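Note: before the target starts, nvmf_veth_init (traced above) assembles the whole test network: a namespace for the target, veth pairs, the 10.0.0.1/2 initiator and 10.0.0.3/4 target addresses, a bridge enslaving the host-side ends, iptables ACCEPT rules for port 4420, and the four pings as a smoke test. Condensed to one initiator/target pair, using only commands that appear in the trace (the log repeats the same steps for the *2 interfaces):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # host reaches the target address through the bridge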
00:10:30.382 [2024-11-20 17:31:12.002676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.382 [2024-11-20 17:31:12.140256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.641 [2024-11-20 17:31:12.229113] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.641 [2024-11-20 17:31:12.229174] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.641 [2024-11-20 17:31:12.229184] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.641 [2024-11-20 17:31:12.229194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.641 [2024-11-20 17:31:12.229201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.641 [2024-11-20 17:31:12.229231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.208 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.208 [2024-11-20 17:31:12.992021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.466 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.466 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:31.466 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.466 17:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.466 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.466 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:31.466 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.466 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.466 [2024-11-20 17:31:13.008209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 malloc0 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:31.467 { 00:10:31.467 "params": { 00:10:31.467 "name": "Nvme$subsystem", 00:10:31.467 "trtype": "$TEST_TRANSPORT", 00:10:31.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:31.467 "adrfam": "ipv4", 00:10:31.467 "trsvcid": "$NVMF_PORT", 00:10:31.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:31.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:31.467 "hdgst": ${hdgst:-false}, 00:10:31.467 "ddgst": ${ddgst:-false} 00:10:31.467 }, 00:10:31.467 "method": "bdev_nvme_attach_controller" 00:10:31.467 } 00:10:31.467 EOF 00:10:31.467 )") 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:31.467 17:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:31.467 "params": { 00:10:31.467 "name": "Nvme1", 00:10:31.467 "trtype": "tcp", 00:10:31.467 "traddr": "10.0.0.3", 00:10:31.467 "adrfam": "ipv4", 00:10:31.467 "trsvcid": "4420", 00:10:31.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:31.467 "hdgst": false, 00:10:31.467 "ddgst": false 00:10:31.467 }, 00:10:31.467 "method": "bdev_nvme_attach_controller" 00:10:31.467 }' 00:10:31.467 [2024-11-20 17:31:13.128525] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:31.467 [2024-11-20 17:31:13.128616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81876 ] 00:10:31.725 [2024-11-20 17:31:13.271665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.725 [2024-11-20 17:31:13.337759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.984 Running I/O for 10 seconds... 00:10:33.854 6573.00 IOPS, 51.35 MiB/s [2024-11-20T17:31:16.608Z] 6685.50 IOPS, 52.23 MiB/s [2024-11-20T17:31:17.543Z] 6689.33 IOPS, 52.26 MiB/s [2024-11-20T17:31:18.921Z] 6709.50 IOPS, 52.42 MiB/s [2024-11-20T17:31:19.857Z] 6721.80 IOPS, 52.51 MiB/s [2024-11-20T17:31:20.792Z] 6728.83 IOPS, 52.57 MiB/s [2024-11-20T17:31:21.729Z] 6717.29 IOPS, 52.48 MiB/s [2024-11-20T17:31:22.664Z] 6715.00 IOPS, 52.46 MiB/s [2024-11-20T17:31:23.601Z] 6715.11 IOPS, 52.46 MiB/s [2024-11-20T17:31:23.601Z] 6718.90 IOPS, 52.49 MiB/s 00:10:41.813 Latency(us) 00:10:41.813 [2024-11-20T17:31:23.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:41.813 Verification LBA range: start 0x0 length 0x1000 00:10:41.813 Nvme1n1 : 10.01 6719.91 52.50 0.00 0.00 18991.41 469.18 31933.91 00:10:41.813 [2024-11-20T17:31:23.601Z] =================================================================================================================== 00:10:41.813 [2024-11-20T17:31:23.601Z] Total : 6719.91 52.50 0.00 0.00 18991.41 469.18 31933.91 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=81993 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:42.073 { 00:10:42.073 "params": { 00:10:42.073 "name": "Nvme$subsystem", 
00:10:42.073 "trtype": "$TEST_TRANSPORT", 00:10:42.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.073 "adrfam": "ipv4", 00:10:42.073 "trsvcid": "$NVMF_PORT", 00:10:42.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.073 "hdgst": ${hdgst:-false}, 00:10:42.073 "ddgst": ${ddgst:-false} 00:10:42.073 }, 00:10:42.073 "method": "bdev_nvme_attach_controller" 00:10:42.073 } 00:10:42.073 EOF 00:10:42.073 )") 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:42.073 [2024-11-20 17:31:23.738532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.738579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:42.073 17:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:42.073 "params": { 00:10:42.073 "name": "Nvme1", 00:10:42.073 "trtype": "tcp", 00:10:42.073 "traddr": "10.0.0.3", 00:10:42.073 "adrfam": "ipv4", 00:10:42.073 "trsvcid": "4420", 00:10:42.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.073 "hdgst": false, 00:10:42.073 "ddgst": false 00:10:42.073 }, 00:10:42.073 "method": "bdev_nvme_attach_controller" 00:10:42.073 }' 00:10:42.073 [2024-11-20 17:31:23.750502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.750530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.762506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.762558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.770499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.770525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.778497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.778526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.786503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.786527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.794505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.794530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.802500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.802525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.810496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.810519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 [2024-11-20 17:31:23.810944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:42.073 [2024-11-20 17:31:23.811033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81993 ] 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.818505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.818530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.826507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.826532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.834518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.834545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.846515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.846540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.073 [2024-11-20 17:31:23.854516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.073 [2024-11-20 17:31:23.854540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.073 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.862533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.862560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.870518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.870545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.882524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.882743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.890528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.890572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.902529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.902557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.914531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.914558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.926535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.926673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:42.335 [2024-11-20 17:31:23.938543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.938735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.950545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.950625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.335 [2024-11-20 17:31:23.950748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.962548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.962745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.974550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.974684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.986568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.986789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:23.998571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:23.998602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:24.010568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:24.010595] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:24.016893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.335 [2024-11-20 17:31:24.022571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:24.022598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.335 [2024-11-20 17:31:24.030572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.335 [2024-11-20 17:31:24.030786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.335 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.038584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.038767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.046575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.046757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.058581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.058770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.070584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.070615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.078577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.078605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.090584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.090766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.098591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.098773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.336 [2024-11-20 17:31:24.110592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.336 [2024-11-20 17:31:24.110773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.336 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.122673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.122864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.134656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.134859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.146678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.146735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.158740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.158778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.170692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.170749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.178686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.178878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.596 [2024-11-20 17:31:24.190733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.596 [2024-11-20 17:31:24.190921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.596 Running I/O for 5 seconds... 
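The stream of identical nvmf_subsystem_add_ns failures surrounding this point is the intent of this phase of the test, not a malfunction: while the second bdevperf instance (pid 81993, the 5-second randrw run launched at target/zcopy.sh@37) drives I/O, the test keeps re-adding a namespace that already occupies NSID 1, and each rejected RPC still forces a subsystem pause/resume cycle under active zcopy I/O. A minimal sketch of the shape of that phase, assuming an SPDK checkout with scripts/rpc.py talking to the default RPC socket and $perfpid holding the bdevperf PID (the exact loop body in target/zcopy.sh is not visible in this log):

# Keep issuing an add_ns call that is expected to be rejected
# (NSID 1 is already attached) for as long as bdevperf is alive.
while kill -0 "$perfpid" 2> /dev/null; do
	scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done

The rpc_cmd invocation traced earlier in this log (target/zcopy.sh@30) passes the same arguments, so only the looping construct above is assumed.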
00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.198727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.198904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.216275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.216428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.233299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.233459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.249619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.249658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.266503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.266537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.283034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.283085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
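Each three-line stanza here is one rejected JSON-RPC round trip: the target logs the subsystem.c/nvmf_rpc.c *ERROR* pair, and the RPC client logs the reply, where Code=-32602 is the standard JSON-RPC "invalid params" error code. Reconstructed as a raw exchange (a sketch only: /var/tmp/spdk.sock is SPDK's default RPC socket and is assumed here, as is a netcat build with Unix-socket support), one such call would look roughly like:

# Params mirror the map[...] dump in the client log lines above.
printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}' | nc -U /var/tmp/spdk.sock
# Expected reply while NSID 1 is still in use:
# {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}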
00:10:42.597 [2024-11-20 17:31:24.299200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.299353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.316717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.316865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.331507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.331656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.346867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.347044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.365387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.365547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.597 [2024-11-20 17:31:24.379908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.597 [2024-11-20 17:31:24.380059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.597 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.390115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.390288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.405011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.405201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.415378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.415527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.429619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.429655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.440317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.440354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.452295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.452470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.466577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.857 [2024-11-20 17:31:24.466779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.857 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.857 [2024-11-20 17:31:24.477371] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.477545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.492101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.492286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.508467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.508621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.518253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.518444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.533487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.533522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.552225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.552258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.566642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.566676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.582008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.582203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.599216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.599376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.615508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.615659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:42.858 [2024-11-20 17:31:24.632309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.858 [2024-11-20 17:31:24.632461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.858 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.116 [2024-11-20 17:31:24.649038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.116 [2024-11-20 17:31:24.649220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.116 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.663159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.663335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.678784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.678950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.696259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.696294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.713472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.713505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.729467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.729634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.746469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.746642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.762695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.762935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:43.117 [2024-11-20 17:31:24.780397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.117 [2024-11-20 17:31:24.780572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:43.117 [2024-11-20 17:31:24.797464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:43.117 [2024-11-20 17:31:24.797627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:43.117 2024/11/20 17:31:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... identical three-line nvmf_subsystem_add_ns error repeats, timestamps 17:31:24.808 through 17:31:25.191 ...]
00:10:43.637 11733.00 IOPS, 91.66 MiB/s [2024-11-20T17:31:25.425Z]
[... identical error block repeats, timestamps 17:31:25.206 through 17:31:26.194 ...]
00:10:44.423 11747.50 IOPS, 91.78 MiB/s [2024-11-20T17:31:26.211Z]
[... identical error block repeats, timestamps 17:31:26.204 through 17:31:26.621 ...]
00:10:44.944 [2024-11-20 17:31:26.631697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:44.944 [2024-11-20 17:31:26.631734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.646282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.646343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.664127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.664181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.679058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.679141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.694385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.694422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.709644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.709682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.944 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:44.944 [2024-11-20 17:31:26.726223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.944 [2024-11-20 17:31:26.726263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:45.204 [2024-11-20 17:31:26.743778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.743814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.760625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.760663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.775185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.775221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.792347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.792383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.806979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.807053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.816263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.816300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.829044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.829146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.844672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.844709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.860816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.860854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.871570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.871607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.881757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.881794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.892364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.892402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.908370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.908420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.924336] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.924373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.940866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.940902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.958676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.958721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.974114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.974161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.204 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.204 [2024-11-20 17:31:26.989939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.204 [2024-11-20 17:31:26.989988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.006683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.006762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.017215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.017251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.027430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.027466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.041728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.041764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.051032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.051092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.067223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.067258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.085261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.085298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.100081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.100117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.116530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.116569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.133744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.133782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.149570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.149606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.166876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.166915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.183263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.183311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 11823.67 IOPS, 92.37 MiB/s [2024-11-20T17:31:27.252Z] [2024-11-20 17:31:27.200124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.200160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.464 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.464 [2024-11-20 17:31:27.216513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.464 [2024-11-20 17:31:27.216549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.465 2024/11/20 17:31:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
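The block above is the target's duplicate-NSID guard firing on every retry: nvmf_subsystem_add_ns is asked to attach NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already claimed, so the target answers with JSON-RPC error -32602 (Invalid parameters). A minimal sketch of one such call follows; it is illustration only, not part of this test run, and it assumes SPDK's default RPC socket path /var/tmp/spdk.sock plus the subsystem and bdev names taken from the log.

#!/usr/bin/env python3
# Sketch only: hand-issue the same RPC the test loop sends, against a running
# SPDK nvmf target. Assumes /var/tmp/spdk.sock (SPDK's default RPC socket) and
# that nqn.2016-06.io.spdk:cnode1 already holds NSID 1 backed by bdev malloc0.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        # NSID 1 is already attached, so the target must reject this request.
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # A single recv() is a simplification; a robust client would keep reading
    # until a complete JSON object has arrived.
    reply = json.loads(sock.recv(65536).decode())

# Expected reply shape, matching the Code=-32602 Msg=Invalid parameters above:
#   {"jsonrpc": "2.0", "id": 1,
#    "error": {"code": -32602, "message": "Invalid parameters"}}
print(reply.get("error"))

The same call can be made through SPDK's bundled scripts/rpc.py client; the raw-socket form is shown only to make the failing request and the -32602 reply explicit.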
(the same three-line error block repeated for each attempt, [2024-11-20 17:31:27.200124] through [2024-11-20 17:31:28.180534])
00:10:46.504 11959.50 IOPS, 93.43 MiB/s [2024-11-20T17:31:28.292Z]
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.504 [2024-11-20 17:31:28.282802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.504 [2024-11-20 17:31:28.282839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.504 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.297542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.297578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.314615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.314651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.329323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.329361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.343527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.343564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.357403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.357472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.366987] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.367041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.377695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.763 [2024-11-20 17:31:28.377732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.763 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.763 [2024-11-20 17:31:28.394927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.394964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.411392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.411429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.428702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.428738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.445040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.445090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.462043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.462090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.478340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.478376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.494402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.494438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.511712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.511748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.525749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.525787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.764 [2024-11-20 17:31:28.541528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.764 [2024-11-20 17:31:28.541564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.764 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.558366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.558402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.573692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.573728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.583875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.583928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.597812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.597865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.614200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.614253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.629579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.629615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.645994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.646050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.661325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.661362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.671124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.671160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.684881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.684917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.701903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.701944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.717137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.717178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.732794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.732831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.750328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.750382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.765056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:47.023 [2024-11-20 17:31:28.765121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.779187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.779225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.793846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.793882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.023 [2024-11-20 17:31:28.803274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.023 [2024-11-20 17:31:28.803327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.023 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.816438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.816474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.830414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.830452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.846283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.846321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.862193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.862246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.878841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.878880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.895064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.895129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.905905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.905941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.919855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.919892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.935889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.935926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.952250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.952288] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.968400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.968436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:28.984836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:28.984874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:29.001698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:29.001735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:29.018124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:29.018160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:29.035668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:29.035705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:29.050926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:29.050963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.283 [2024-11-20 17:31:29.065692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.283 [2024-11-20 17:31:29.065742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.283 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.079782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.079820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.095669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.095706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.112458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.112508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.129427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.129480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.144111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.144144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.553 [2024-11-20 17:31:29.160091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.553 [2024-11-20 17:31:29.160124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
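The repetition above is one deterministic failure being exercised: NSID 1 on cnode1 is already occupied, so every further nvmf_subsystem_add_ns for that NSID is rejected with JSON-RPC error -32602. A minimal sketch of reproducing the same rejection by hand, assuming a running SPDK target and the stock scripts/rpc.py helpers (none of these exact commands appear in this run; the test drives the RPCs through its own rpc_cmd wrapper):

```bash
# Hypothetical repro sketch (not part of this test run): add the same NSID twice.
scripts/rpc.py bdev_malloc_create -b malloc0 32 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # NSID 1 is now in use
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Code=-32602 Msg=Invalid parameters
```

The paired target-side *ERROR* lines and the client-side Go-style map[...] parameter dump are the same failure reported from both ends of the RPC.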
00:10:47.553 12037.40 IOPS, 94.04 MiB/s [2024-11-20T17:31:29.341Z]
00:10:47.553
00:10:47.553 Latency(us)
00:10:47.553 [2024-11-20T17:31:29.341Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:47.553 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:47.553 Nvme1n1             :       5.01   12046.10      94.11       0.00       0.00   10616.20    4170.47   19779.96
00:10:47.553 [2024-11-20T17:31:29.341Z] ===================================================================================================================
00:10:47.553 [2024-11-20T17:31:29.341Z] Total               :              12046.10      94.11       0.00       0.00   10616.20    4170.47   19779.96
00:10:47.553 [2024-11-20 17:31:29.204247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.553 [2024-11-20 17:31:29.204284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.553 2024/11/20 17:31:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same three entries repeat at 12 ms intervals, timestamps 17:31:29.216 through 17:31:29.408, as the add-namespace loop runs on briefly after the I/O summary]
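A quick sanity check on the summary rows (my arithmetic, not part of the log): the MiB/s column is just the IOPS figure multiplied by the 8192-byte I/O size given in the job description:

```bash
# 12046.10 IOPS * 8192 bytes per I/O, converted to MiB/s; prints 94.11,
# matching the Total row above.
awk 'BEGIN { printf "%.2f MiB/s\n", 12046.10 * 8192 / (1024 * 1024) }'
```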
00:10:47.813 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (81993) - No such process
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 81993
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:47.813 delay0
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.813 17:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:10:48.071 [2024-11-20 17:31:29.612984] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:54.636 Initializing NVMe Controllers
00:10:54.636 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:54.636 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:54.636 Initialization complete. Launching workers.
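For context on the bdev_delay_create call traced above: the test swaps the plain malloc namespace for a delay bdev so that I/O stays outstanding long enough for the abort example (whose results follow below) to have something to cancel. A hedged annotation of the flags, from my reading of SPDK's delay bdev RPC rather than from this log (verify with `rpc.py bdev_delay_create --help`):

```bash
# Assumed flag meanings; all four latencies are in microseconds, so 1000000
# injects roughly one second of delay per I/O on the wrapped bdev:
#   -b malloc0     base bdev to wrap
#   -d delay0      name of the new delay bdev
#   -r 1000000     average read latency
#   -t 1000000     p99 read latency
#   -w 1000000     average write latency
#   -n 1000000     p99 write latency
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
```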
00:10:54.636 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 90
00:10:54.636 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 377, failed to submit 33
00:10:54.636 success 186, unsuccessful 191, failed 0
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 81825 ']'
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 81825
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 81825 ']'
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 81825
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81825
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81825'
killing process with pid 81825
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 81825
00:10:54.636 17:31:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 81825
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
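A small consistency check on the abort run's counters (my arithmetic and reading, not from the log): each abort that was actually submitted is counted once as either a success or an unsuccessful attempt, so the two should sum to the submitted figure, with the 33 that failed to submit on top:

```bash
# 186 successful + 191 unsuccessful = 377 aborts submitted, matching the
# CTRLR line above; prints "377 377".
awk 'BEGIN { print 186 + 191, 377 }'
```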
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:10:54.636
00:10:54.636 real 0m25.041s
00:10:54.636 user 0m38.276s
00:10:54.636 sys 0m7.899s
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:54.636 ************************************
00:10:54.636 END TEST nvmf_zcopy
00:10:54.636 ************************************
00:10:54.636 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:54.637 17:31:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:54.637 17:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:54.637 17:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:54.637 17:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:54.637 ************************************
00:10:54.637 START TEST nvmf_nmic
00:10:54.637 ************************************
00:10:54.637 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:54.896 * Looking for test storage...
00:10:54.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:54.896 --rc genhtml_branch_coverage=1
00:10:54.896 --rc genhtml_function_coverage=1
00:10:54.896 --rc genhtml_legend=1
00:10:54.896 --rc geninfo_all_blocks=1
00:10:54.896 --rc geninfo_unexecuted_blocks=1
00:10:54.896
00:10:54.896 '
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:54.896 --rc genhtml_branch_coverage=1
00:10:54.896 --rc genhtml_function_coverage=1
00:10:54.896 --rc genhtml_legend=1
00:10:54.896 --rc geninfo_all_blocks=1
00:10:54.896 --rc geninfo_unexecuted_blocks=1
00:10:54.896
00:10:54.896 '
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:54.896 --rc genhtml_branch_coverage=1
00:10:54.896 --rc genhtml_function_coverage=1
00:10:54.896 --rc genhtml_legend=1
00:10:54.896 --rc geninfo_all_blocks=1
00:10:54.896 --rc geninfo_unexecuted_blocks=1
00:10:54.896
00:10:54.896 '
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:54.896 --rc genhtml_branch_coverage=1
00:10:54.896 --rc genhtml_function_coverage=1
00:10:54.896 --rc genhtml_legend=1
00:10:54.896 --rc geninfo_all_blocks=1
00:10:54.896 --rc geninfo_unexecuted_blocks=1
00:10:54.896
00:10:54.896 '
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.896 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:54.897 17:31:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:54.897 Cannot 
find device "nvmf_init_br" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:54.897 Cannot find device "nvmf_init_br2" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:54.897 Cannot find device "nvmf_tgt_br" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.897 Cannot find device "nvmf_tgt_br2" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:54.897 Cannot find device "nvmf_init_br" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:54.897 Cannot find device "nvmf_init_br2" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:54.897 Cannot find device "nvmf_tgt_br" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:54.897 Cannot find device "nvmf_tgt_br2" 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:54.897 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:55.156 Cannot find device "nvmf_br" 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:55.156 Cannot find device "nvmf_init_if" 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:55.156 Cannot find device "nvmf_init_if2" 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.156 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:55.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:55.415 00:10:55.415 --- 10.0.0.3 ping statistics --- 00:10:55.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.415 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:55.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:55.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:55.415 00:10:55.415 --- 10.0.0.4 ping statistics --- 00:10:55.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.415 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:55.415 00:10:55.415 --- 10.0.0.1 ping statistics --- 00:10:55.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.415 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:55.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:55.415 00:10:55.415 --- 10.0.0.2 ping statistics --- 00:10:55.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.415 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:55.415 17:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=82372 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 82372 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 82372 ']' 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.415 17:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 [2024-11-20 17:31:37.075015] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
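#
# A minimal standalone sketch of the veth/bridge topology that nvmf_veth_init has
# just finished building and ping-testing in the trace above (assuming iproute2 and
# iptables are available; interface names and 10.0.0.x addresses are exactly those
# in the log, and the second *_if2/*_br2 pair is created the same way):
#
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                   # bridge joins the two veth halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                        # initiator -> target reachability check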
00:10:55.415 [2024-11-20 17:31:37.075122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.674 [2024-11-20 17:31:37.219972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.674 [2024-11-20 17:31:37.309714] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.674 [2024-11-20 17:31:37.309781] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.674 [2024-11-20 17:31:37.309795] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.674 [2024-11-20 17:31:37.309805] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.674 [2024-11-20 17:31:37.309814] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.674 [2024-11-20 17:31:37.310700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.674 [2024-11-20 17:31:37.310916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.674 [2024-11-20 17:31:37.310988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.674 [2024-11-20 17:31:37.311002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.610 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-20 17:31:38.131347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 Malloc0 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-20 17:31:38.185872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.611 test case1: single bdev can't be used in multiple subsystems 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-20 17:31:38.209709] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:56.611 [2024-11-20 17:31:38.209746] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:56.611 [2024-11-20 17:31:38.209755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.611 2024/11/20 17:31:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.611 request: 00:10:56.611 { 00:10:56.611 "method": "nvmf_subsystem_add_ns", 00:10:56.611 "params": { 00:10:56.611 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:56.611 "namespace": { 00:10:56.611 "bdev_name": "Malloc0", 00:10:56.611 "no_auto_visible": false 00:10:56.611 } 00:10:56.611 } 00:10:56.611 } 00:10:56.611 Got JSON-RPC error response 00:10:56.611 GoRPCClient: error on JSON-RPC call 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:56.611 Adding namespace failed - expected result. 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:56.611 test case2: host connect to nvmf target in multiple paths 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.611 [2024-11-20 17:31:38.225880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:56.870 17:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:59.402 17:31:40 
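#
# The nmic sequence traced above, condensed into the equivalent rpc.py / nvme-cli
# calls (a sketch: rpc.py stands in for scripts/rpc.py against the nvmf_tgt started
# with `ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF`; the
# generated host NQN/ID from the trace are elided as <hostnqn>/<hostid>):
#
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# test case1: a bdev already claimed exclusive_write by cnode1 cannot be added to a
# second subsystem -- the add_ns call below fails with JSON-RPC -32602, as logged:
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
# test case2: one subsystem exposed on two ports, connected over both paths:
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421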
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:59.402 17:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:59.402 [global] 00:10:59.402 thread=1 00:10:59.402 invalidate=1 00:10:59.402 rw=write 00:10:59.402 time_based=1 00:10:59.402 runtime=1 00:10:59.402 ioengine=libaio 00:10:59.402 direct=1 00:10:59.402 bs=4096 00:10:59.402 iodepth=1 00:10:59.402 norandommap=0 00:10:59.402 numjobs=1 00:10:59.402 00:10:59.402 verify_dump=1 00:10:59.402 verify_backlog=512 00:10:59.402 verify_state_save=0 00:10:59.402 do_verify=1 00:10:59.402 verify=crc32c-intel 00:10:59.402 [job0] 00:10:59.402 filename=/dev/nvme0n1 00:10:59.402 Could not set queue depth (nvme0n1) 00:10:59.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.402 fio-3.35 00:10:59.402 Starting 1 thread 00:11:00.338 00:11:00.338 job0: (groupid=0, jobs=1): err= 0: pid=82487: Wed Nov 20 17:31:41 2024 00:11:00.338 read: IOPS=3290, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:11:00.338 slat (nsec): min=13479, max=75193, avg=16949.90, stdev=5363.62 00:11:00.338 clat (usec): min=113, max=505, avg=144.28, stdev=20.16 00:11:00.338 lat (usec): min=127, max=527, avg=161.23, stdev=20.86 00:11:00.338 clat percentiles (usec): 00:11:00.338 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:11:00.338 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:11:00.338 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 178], 00:11:00.338 | 99.00th=[ 200], 99.50th=[ 217], 99.90th=[ 306], 99.95th=[ 498], 00:11:00.338 | 99.99th=[ 506] 00:11:00.338 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:00.338 slat (usec): min=19, max=110, avg=24.30, stdev= 7.65 00:11:00.338 clat (usec): min=73, max=346, avg=103.19, stdev=19.13 00:11:00.338 lat (usec): min=98, max=371, avg=127.49, stdev=21.44 00:11:00.338 clat percentiles (usec): 00:11:00.338 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:11:00.338 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 102], 00:11:00.338 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 124], 95.00th=[ 133], 00:11:00.338 | 99.00th=[ 165], 99.50th=[ 217], 99.90th=[ 297], 99.95th=[ 314], 00:11:00.338 | 99.99th=[ 347] 00:11:00.338 bw ( KiB/s): min=15008, max=15008, per=100.00%, avg=15008.00, stdev= 0.00, samples=1 00:11:00.338 iops : min= 3752, max= 3752, avg=3752.00, stdev= 0.00, samples=1 00:11:00.338 lat (usec) : 100=29.05%, 250=70.63%, 500=0.31%, 750=0.01% 00:11:00.338 cpu : usr=2.10%, sys=10.80%, ctx=6878, majf=0, minf=5 00:11:00.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.338 issued rwts: total=3294,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.339 00:11:00.339 Run status group 0 (all jobs): 00:11:00.339 READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:11:00.339 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), 
run=1001-1001msec 00:11:00.339 00:11:00.339 Disk stats (read/write): 00:11:00.339 nvme0n1: ios=3099/3072, merge=0/0, ticks=506/380, in_queue=886, util=91.38% 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:00.339 17:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.339 rmmod nvme_tcp 00:11:00.339 rmmod nvme_fabrics 00:11:00.339 rmmod nvme_keyring 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 82372 ']' 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 82372 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 82372 ']' 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 82372 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.339 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82372 00:11:00.598 killing process with pid 82372 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
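#
# The job file printed before the run above, re-expressed as a single fio command
# line (a sketch of what the fio-wrapper invocation amounts to; each flag mirrors
# one entry of the [global]/[job0] sections):
#
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 \
    --time_based=1 --runtime=1 --ioengine=libaio --direct=1 --invalidate=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
    --verify_state_save=0 --norandommap=0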
'killing process with pid 82372' 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 82372 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 82372 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.598 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:00.857 00:11:00.857 real 0m6.247s 00:11:00.857 user 0m20.043s 00:11:00.857 sys 0m1.504s 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.857 
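#
# The mirror-image cleanup nvmf_veth_fini runs above, as plain commands (a sketch:
# the iptables pipeline strips only the comment-tagged SPDK rules, and
# _remove_spdk_ns, whose body is not shown in the trace, is assumed to end in the
# namespace deletion on the last line):
#
iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop rules tagged SPDK_NVMF
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link set nvmf_init_br down
ip link set nvmf_tgt_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                               # deleting one veth end removes its peer
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                          # assumed _remove_spdk_ns equivalent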
************************************ 00:11:00.857 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 END TEST nvmf_nmic 00:11:00.857 ************************************ 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.118 ************************************ 00:11:01.118 START TEST nvmf_fio_target 00:11:01.118 ************************************ 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.118 * Looking for test storage... 00:11:01.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.118 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:01.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.118 --rc genhtml_branch_coverage=1 00:11:01.118 --rc genhtml_function_coverage=1 00:11:01.118 --rc genhtml_legend=1 00:11:01.118 --rc geninfo_all_blocks=1 00:11:01.118 --rc geninfo_unexecuted_blocks=1 00:11:01.118 00:11:01.118 ' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.119 --rc genhtml_branch_coverage=1 00:11:01.119 --rc genhtml_function_coverage=1 00:11:01.119 --rc genhtml_legend=1 00:11:01.119 --rc geninfo_all_blocks=1 00:11:01.119 --rc geninfo_unexecuted_blocks=1 00:11:01.119 00:11:01.119 ' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.119 --rc genhtml_branch_coverage=1 00:11:01.119 --rc genhtml_function_coverage=1 00:11:01.119 --rc genhtml_legend=1 00:11:01.119 --rc geninfo_all_blocks=1 00:11:01.119 --rc geninfo_unexecuted_blocks=1 00:11:01.119 00:11:01.119 ' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:01.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.119 --rc genhtml_branch_coverage=1 00:11:01.119 --rc genhtml_function_coverage=1 00:11:01.119 --rc genhtml_legend=1 00:11:01.119 --rc geninfo_all_blocks=1 00:11:01.119 --rc geninfo_unexecuted_blocks=1 00:11:01.119 00:11:01.119 ' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:01.119 
17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.119 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.120 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.120 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.120 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.120 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.120 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.120 17:31:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:01.379 Cannot find device "nvmf_init_br" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:01.379 Cannot find device "nvmf_init_br2" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:01.379 Cannot find device "nvmf_tgt_br" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.379 Cannot find device "nvmf_tgt_br2" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:01.379 Cannot find device "nvmf_init_br" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:01.379 Cannot find device "nvmf_init_br2" 00:11:01.379 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:01.380 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:01.380 Cannot find device "nvmf_tgt_br" 00:11:01.380 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:01.380 17:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:01.380 Cannot find device "nvmf_tgt_br2" 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:01.380 Cannot find device "nvmf_br" 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.380 Cannot find device "nvmf_init_if" 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.380 Cannot find device "nvmf_init_if2" 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:01.380 
17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.380 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:01.641 00:11:01.641 --- 10.0.0.3 ping statistics --- 00:11:01.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.641 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.641 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.641 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:11:01.641 00:11:01.641 --- 10.0.0.4 ping statistics --- 00:11:01.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.641 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:01.641 00:11:01.641 --- 10.0.0.1 ping statistics --- 00:11:01.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.641 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:11:01.641 00:11:01.641 --- 10.0.0.2 ping statistics --- 00:11:01.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.641 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=82722 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 82722 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 82722 ']' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.641 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.641 [2024-11-20 17:31:43.425669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:01.641 [2024-11-20 17:31:43.425967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.900 [2024-11-20 17:31:43.562528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.900 [2024-11-20 17:31:43.631412] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.900 [2024-11-20 17:31:43.631602] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.900 [2024-11-20 17:31:43.631689] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.900 [2024-11-20 17:31:43.631769] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.900 [2024-11-20 17:31:43.631839] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.900 [2024-11-20 17:31:43.632129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.900 [2024-11-20 17:31:43.632338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.900 [2024-11-20 17:31:43.632852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.900 [2024-11-20 17:31:43.632867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.158 17:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:02.416 [2024-11-20 17:31:44.128571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.416 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.984 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:02.984 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.984 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:02.984 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.243 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:03.243 17:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.502 17:31:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:03.502 17:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:03.760 17:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.328 17:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:04.328 17:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.328 17:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:04.328 17:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.895 17:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:04.895 17:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:05.153 17:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.411 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.411 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.669 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:05.669 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.928 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:05.928 [2024-11-20 17:31:47.708210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:06.186 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:06.186 17:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:06.753 17:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:09.283 17:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.283 [global] 00:11:09.283 thread=1 00:11:09.283 invalidate=1 00:11:09.283 rw=write 00:11:09.283 time_based=1 00:11:09.283 runtime=1 00:11:09.283 ioengine=libaio 00:11:09.283 direct=1 00:11:09.283 bs=4096 00:11:09.283 iodepth=1 00:11:09.283 norandommap=0 00:11:09.283 numjobs=1 00:11:09.283 00:11:09.283 verify_dump=1 00:11:09.283 verify_backlog=512 00:11:09.283 verify_state_save=0 00:11:09.283 do_verify=1 00:11:09.283 verify=crc32c-intel 00:11:09.283 [job0] 00:11:09.283 filename=/dev/nvme0n1 00:11:09.283 [job1] 00:11:09.283 filename=/dev/nvme0n2 00:11:09.283 [job2] 00:11:09.283 filename=/dev/nvme0n3 00:11:09.283 [job3] 00:11:09.283 filename=/dev/nvme0n4 00:11:09.283 Could not set queue depth (nvme0n1) 00:11:09.283 Could not set queue depth (nvme0n2) 00:11:09.283 Could not set queue depth (nvme0n3) 00:11:09.283 Could not set queue depth (nvme0n4) 00:11:09.283 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.283 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.283 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.283 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.283 fio-3.35 00:11:09.283 Starting 4 threads 00:11:10.219 00:11:10.219 job0: (groupid=0, jobs=1): err= 0: pid=83006: Wed Nov 20 17:31:51 2024 00:11:10.219 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:10.219 slat (nsec): min=11365, max=61338, avg=16120.18, stdev=5617.28 00:11:10.219 clat (usec): min=143, max=365, avg=242.46, stdev=36.60 00:11:10.219 lat (usec): min=155, max=382, avg=258.58, stdev=37.85 00:11:10.219 clat percentiles (usec): 00:11:10.219 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 210], 00:11:10.219 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 249], 00:11:10.219 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:11:10.219 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 367], 00:11:10.219 | 99.99th=[ 367] 00:11:10.219 write: IOPS=2052, BW=8212KiB/s (8409kB/s)(8220KiB/1001msec); 0 zone resets 00:11:10.219 slat 
(usec): min=16, max=110, avg=24.92, stdev= 8.17 00:11:10.219 clat (usec): min=102, max=4612, avg=200.15, stdev=164.59 00:11:10.219 lat (usec): min=121, max=4629, avg=225.07, stdev=164.87 00:11:10.219 clat percentiles (usec): 00:11:10.219 | 1.00th=[ 124], 5.00th=[ 139], 10.00th=[ 151], 20.00th=[ 161], 00:11:10.219 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 198], 00:11:10.219 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 258], 00:11:10.219 | 99.00th=[ 306], 99.50th=[ 375], 99.90th=[ 3818], 99.95th=[ 3916], 00:11:10.219 | 99.99th=[ 4621] 00:11:10.219 bw ( KiB/s): min= 8192, max= 8192, per=28.64%, avg=8192.00, stdev= 0.00, samples=1 00:11:10.219 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.219 lat (usec) : 250=76.68%, 500=23.18%, 1000=0.02% 00:11:10.219 lat (msec) : 2=0.02%, 4=0.07%, 10=0.02% 00:11:10.219 cpu : usr=1.70%, sys=6.50%, ctx=4103, majf=0, minf=5 00:11:10.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.219 issued rwts: total=2048,2055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.219 job1: (groupid=0, jobs=1): err= 0: pid=83007: Wed Nov 20 17:31:51 2024 00:11:10.219 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:10.219 slat (nsec): min=10554, max=65320, avg=15458.15, stdev=6191.50 00:11:10.219 clat (usec): min=242, max=755, avg=415.76, stdev=63.06 00:11:10.219 lat (usec): min=254, max=771, avg=431.22, stdev=62.97 00:11:10.219 clat percentiles (usec): 00:11:10.219 | 1.00th=[ 293], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 363], 00:11:10.219 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 424], 00:11:10.219 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 490], 95.00th=[ 529], 00:11:10.219 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 701], 99.95th=[ 758], 00:11:10.219 | 99.99th=[ 758] 00:11:10.219 write: IOPS=1525, BW=6102KiB/s (6248kB/s)(6108KiB/1001msec); 0 zone resets 00:11:10.219 slat (usec): min=11, max=115, avg=26.15, stdev= 9.92 00:11:10.219 clat (usec): min=150, max=654, avg=336.58, stdev=71.39 00:11:10.219 lat (usec): min=174, max=670, avg=362.73, stdev=70.95 00:11:10.219 clat percentiles (usec): 00:11:10.219 | 1.00th=[ 165], 5.00th=[ 221], 10.00th=[ 253], 20.00th=[ 285], 00:11:10.219 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:11:10.219 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[ 457], 00:11:10.219 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 644], 99.95th=[ 652], 00:11:10.219 | 99.99th=[ 652] 00:11:10.219 bw ( KiB/s): min= 6104, max= 6104, per=21.34%, avg=6104.00, stdev= 0.00, samples=1 00:11:10.219 iops : min= 1526, max= 1526, avg=1526.00, stdev= 0.00, samples=1 00:11:10.219 lat (usec) : 250=5.57%, 500=89.77%, 750=4.63%, 1000=0.04% 00:11:10.219 cpu : usr=1.00%, sys=4.20%, ctx=2551, majf=0, minf=14 00:11:10.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.219 issued rwts: total=1024,1527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.219 job2: (groupid=0, jobs=1): err= 0: pid=83008: Wed Nov 20 17:31:51 2024 
00:11:10.220 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:10.220 slat (nsec): min=10595, max=68688, avg=16282.78, stdev=6581.63 00:11:10.220 clat (usec): min=228, max=763, avg=414.83, stdev=62.79 00:11:10.220 lat (usec): min=241, max=785, avg=431.11, stdev=63.87 00:11:10.220 clat percentiles (usec): 00:11:10.220 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 343], 20.00th=[ 367], 00:11:10.220 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 424], 00:11:10.220 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 529], 00:11:10.220 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 709], 99.95th=[ 766], 00:11:10.220 | 99.99th=[ 766] 00:11:10.220 write: IOPS=1526, BW=6106KiB/s (6252kB/s)(6112KiB/1001msec); 0 zone resets 00:11:10.220 slat (nsec): min=10923, max=78785, avg=25811.29, stdev=9331.66 00:11:10.220 clat (usec): min=152, max=794, avg=336.94, stdev=68.21 00:11:10.220 lat (usec): min=173, max=816, avg=362.75, stdev=67.97 00:11:10.220 clat percentiles (usec): 00:11:10.220 | 1.00th=[ 172], 5.00th=[ 233], 10.00th=[ 258], 20.00th=[ 285], 00:11:10.220 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 351], 00:11:10.220 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 449], 00:11:10.220 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 685], 99.95th=[ 791], 00:11:10.220 | 99.99th=[ 791] 00:11:10.220 bw ( KiB/s): min= 6104, max= 6104, per=21.34%, avg=6104.00, stdev= 0.00, samples=1 00:11:10.220 iops : min= 1526, max= 1526, avg=1526.00, stdev= 0.00, samples=1 00:11:10.220 lat (usec) : 250=5.25%, 500=90.32%, 750=4.35%, 1000=0.08% 00:11:10.220 cpu : usr=1.00%, sys=4.20%, ctx=2553, majf=0, minf=11 00:11:10.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.220 issued rwts: total=1024,1528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.220 job3: (groupid=0, jobs=1): err= 0: pid=83009: Wed Nov 20 17:31:51 2024 00:11:10.220 read: IOPS=1948, BW=7792KiB/s (7979kB/s)(7800KiB/1001msec) 00:11:10.220 slat (nsec): min=13646, max=74642, avg=20585.36, stdev=7666.54 00:11:10.220 clat (usec): min=154, max=361, avg=245.74, stdev=35.52 00:11:10.220 lat (usec): min=183, max=383, avg=266.33, stdev=35.76 00:11:10.220 clat percentiles (usec): 00:11:10.220 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 215], 00:11:10.220 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 253], 00:11:10.220 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:11:10.220 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 363], 00:11:10.220 | 99.99th=[ 363] 00:11:10.220 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:10.220 slat (usec): min=18, max=109, avg=30.29, stdev=10.13 00:11:10.220 clat (usec): min=111, max=3131, avg=200.15, stdev=87.24 00:11:10.220 lat (usec): min=143, max=3170, avg=230.44, stdev=87.90 00:11:10.220 clat percentiles (usec): 00:11:10.220 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 167], 00:11:10.220 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 204], 00:11:10.220 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 260], 00:11:10.220 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 523], 99.95th=[ 2311], 00:11:10.220 | 99.99th=[ 3130] 00:11:10.220 bw ( KiB/s): min= 8192, max= 8192, per=28.64%, 
avg=8192.00, stdev= 0.00, samples=1 00:11:10.220 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:10.220 lat (usec) : 250=74.64%, 500=25.29%, 750=0.03% 00:11:10.220 lat (msec) : 4=0.05% 00:11:10.220 cpu : usr=1.60%, sys=7.80%, ctx=4003, majf=0, minf=9 00:11:10.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.220 issued rwts: total=1950,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.220 00:11:10.220 Run status group 0 (all jobs): 00:11:10.220 READ: bw=23.6MiB/s (24.7MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=23.6MiB (24.8MB), run=1001-1001msec 00:11:10.220 WRITE: bw=27.9MiB/s (29.3MB/s), 6102KiB/s-8212KiB/s (6248kB/s-8409kB/s), io=28.0MiB (29.3MB), run=1001-1001msec 00:11:10.220 00:11:10.220 Disk stats (read/write): 00:11:10.220 nvme0n1: ios=1585/1939, merge=0/0, ticks=442/414, in_queue=856, util=87.05% 00:11:10.220 nvme0n2: ios=1051/1104, merge=0/0, ticks=459/393, in_queue=852, util=88.06% 00:11:10.220 nvme0n3: ios=1024/1103, merge=0/0, ticks=436/394, in_queue=830, util=89.12% 00:11:10.220 nvme0n4: ios=1536/1871, merge=0/0, ticks=412/420, in_queue=832, util=89.68% 00:11:10.220 17:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:10.220 [global] 00:11:10.220 thread=1 00:11:10.220 invalidate=1 00:11:10.220 rw=randwrite 00:11:10.220 time_based=1 00:11:10.220 runtime=1 00:11:10.220 ioengine=libaio 00:11:10.220 direct=1 00:11:10.220 bs=4096 00:11:10.220 iodepth=1 00:11:10.220 norandommap=0 00:11:10.220 numjobs=1 00:11:10.220 00:11:10.220 verify_dump=1 00:11:10.220 verify_backlog=512 00:11:10.220 verify_state_save=0 00:11:10.220 do_verify=1 00:11:10.220 verify=crc32c-intel 00:11:10.220 [job0] 00:11:10.220 filename=/dev/nvme0n1 00:11:10.220 [job1] 00:11:10.220 filename=/dev/nvme0n2 00:11:10.220 [job2] 00:11:10.220 filename=/dev/nvme0n3 00:11:10.220 [job3] 00:11:10.220 filename=/dev/nvme0n4 00:11:10.220 Could not set queue depth (nvme0n1) 00:11:10.220 Could not set queue depth (nvme0n2) 00:11:10.220 Could not set queue depth (nvme0n3) 00:11:10.220 Could not set queue depth (nvme0n4) 00:11:10.479 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.479 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.479 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.479 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.479 fio-3.35 00:11:10.479 Starting 4 threads 00:11:11.890 00:11:11.890 job0: (groupid=0, jobs=1): err= 0: pid=83062: Wed Nov 20 17:31:53 2024 00:11:11.890 read: IOPS=1222, BW=4891KiB/s (5008kB/s)(4896KiB/1001msec) 00:11:11.890 slat (nsec): min=9942, max=91171, avg=18558.35, stdev=7586.35 00:11:11.890 clat (usec): min=247, max=5150, avg=411.93, stdev=153.18 00:11:11.890 lat (usec): min=273, max=5216, avg=430.48, stdev=154.70 00:11:11.890 clat percentiles (usec): 00:11:11.890 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 359], 00:11:11.891 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 
412], 00:11:11.891 | 70.00th=[ 437], 80.00th=[ 457], 90.00th=[ 486], 95.00th=[ 519], 00:11:11.891 | 99.00th=[ 627], 99.50th=[ 750], 99.90th=[ 1287], 99.95th=[ 5145], 00:11:11.891 | 99.99th=[ 5145] 00:11:11.891 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:11.891 slat (nsec): min=13821, max=96087, avg=24407.27, stdev=8217.74 00:11:11.891 clat (usec): min=135, max=564, avg=279.54, stdev=55.28 00:11:11.891 lat (usec): min=155, max=600, avg=303.94, stdev=57.02 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 180], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 237], 00:11:11.891 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:11:11.891 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 392], 00:11:11.891 | 99.00th=[ 441], 99.50th=[ 490], 99.90th=[ 545], 99.95th=[ 562], 00:11:11.891 | 99.99th=[ 562] 00:11:11.891 bw ( KiB/s): min= 6576, max= 6576, per=21.62%, avg=6576.00, stdev= 0.00, samples=1 00:11:11.891 iops : min= 1644, max= 1644, avg=1644.00, stdev= 0.00, samples=1 00:11:11.891 lat (usec) : 250=17.97%, 500=78.73%, 750=3.12%, 1000=0.11% 00:11:11.891 lat (msec) : 2=0.04%, 10=0.04% 00:11:11.891 cpu : usr=1.70%, sys=4.60%, ctx=2762, majf=0, minf=13 00:11:11.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 issued rwts: total=1224,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.891 job1: (groupid=0, jobs=1): err= 0: pid=83063: Wed Nov 20 17:31:53 2024 00:11:11.891 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:11.891 slat (nsec): min=11921, max=63182, avg=15770.86, stdev=4679.00 00:11:11.891 clat (usec): min=129, max=2788, avg=182.66, stdev=89.88 00:11:11.891 lat (usec): min=145, max=2811, avg=198.43, stdev=91.35 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:11:11.891 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:11:11.891 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 258], 95.00th=[ 285], 00:11:11.891 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 1483], 99.95th=[ 2638], 00:11:11.891 | 99.99th=[ 2802] 00:11:11.891 write: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:11:11.891 slat (usec): min=16, max=131, avg=22.66, stdev= 7.47 00:11:11.891 clat (usec): min=88, max=747, avg=137.33, stdev=42.72 00:11:11.891 lat (usec): min=113, max=773, avg=159.98, stdev=46.81 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:11:11.891 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 128], 00:11:11.891 | 70.00th=[ 135], 80.00th=[ 151], 90.00th=[ 208], 95.00th=[ 233], 00:11:11.891 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 586], 00:11:11.891 | 99.99th=[ 750] 00:11:11.891 bw ( KiB/s): min=10872, max=10872, per=35.74%, avg=10872.00, stdev= 0.00, samples=1 00:11:11.891 iops : min= 2718, max= 2718, avg=2718.00, stdev= 0.00, samples=1 00:11:11.891 lat (usec) : 100=0.93%, 250=92.49%, 500=6.42%, 750=0.09% 00:11:11.891 lat (msec) : 2=0.04%, 4=0.04% 00:11:11.891 cpu : usr=2.10%, sys=8.40%, ctx=5566, majf=0, minf=5 00:11:11.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 issued rwts: total=2560,3005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.891 job2: (groupid=0, jobs=1): err= 0: pid=83064: Wed Nov 20 17:31:53 2024 00:11:11.891 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:11:11.891 slat (nsec): min=10746, max=85117, avg=19698.07, stdev=8049.50 00:11:11.891 clat (usec): min=155, max=2389, avg=383.40, stdev=125.17 00:11:11.891 lat (usec): min=174, max=2411, avg=403.10, stdev=124.27 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 186], 5.00th=[ 219], 10.00th=[ 237], 20.00th=[ 265], 00:11:11.891 | 30.00th=[ 297], 40.00th=[ 355], 50.00th=[ 383], 60.00th=[ 404], 00:11:11.891 | 70.00th=[ 469], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:11:11.891 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 807], 99.95th=[ 2376], 00:11:11.891 | 99.99th=[ 2376] 00:11:11.891 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:11.891 slat (usec): min=12, max=134, avg=26.55, stdev=10.82 00:11:11.891 clat (usec): min=130, max=557, avg=250.56, stdev=60.77 00:11:11.891 lat (usec): min=156, max=583, avg=277.11, stdev=60.01 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 196], 00:11:11.891 | 30.00th=[ 217], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 262], 00:11:11.891 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 359], 00:11:11.891 | 99.00th=[ 412], 99.50th=[ 474], 99.90th=[ 515], 99.95th=[ 562], 00:11:11.891 | 99.99th=[ 562] 00:11:11.891 bw ( KiB/s): min= 8192, max= 8192, per=26.93%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.891 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.891 lat (usec) : 250=34.85%, 500=54.89%, 750=10.19%, 1000=0.03% 00:11:11.891 lat (msec) : 4=0.03% 00:11:11.891 cpu : usr=1.30%, sys=5.60%, ctx=2955, majf=0, minf=13 00:11:11.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.891 job3: (groupid=0, jobs=1): err= 0: pid=83065: Wed Nov 20 17:31:53 2024 00:11:11.891 read: IOPS=1277, BW=5111KiB/s (5234kB/s)(5116KiB/1001msec) 00:11:11.891 slat (usec): min=11, max=213, avg=22.16, stdev=10.00 00:11:11.891 clat (usec): min=192, max=5204, avg=390.11, stdev=150.39 00:11:11.891 lat (usec): min=206, max=5231, avg=412.27, stdev=151.22 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 253], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 343], 00:11:11.891 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 392], 00:11:11.891 | 70.00th=[ 416], 80.00th=[ 433], 90.00th=[ 457], 95.00th=[ 482], 00:11:11.891 | 99.00th=[ 578], 99.50th=[ 652], 99.90th=[ 1336], 99.95th=[ 5211], 00:11:11.891 | 99.99th=[ 5211] 00:11:11.891 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:11.891 slat (usec): min=14, max=124, avg=31.23, stdev= 7.37 00:11:11.891 clat (usec): min=151, max=519, avg=271.74, stdev=59.69 00:11:11.891 lat (usec): min=190, max=544, avg=302.97, stdev=58.32 00:11:11.891 clat percentiles (usec): 00:11:11.891 | 1.00th=[ 178], 
5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 219], 00:11:11.891 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:11:11.891 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 388], 00:11:11.891 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 519], 00:11:11.891 | 99.99th=[ 519] 00:11:11.891 bw ( KiB/s): min= 6552, max= 6552, per=21.54%, avg=6552.00, stdev= 0.00, samples=1 00:11:11.891 iops : min= 1638, max= 1638, avg=1638.00, stdev= 0.00, samples=1 00:11:11.891 lat (usec) : 250=20.36%, 500=78.37%, 750=1.14%, 1000=0.07% 00:11:11.891 lat (msec) : 2=0.04%, 10=0.04% 00:11:11.891 cpu : usr=1.90%, sys=5.60%, ctx=2820, majf=0, minf=15 00:11:11.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.891 issued rwts: total=1279,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.891 00:11:11.891 Run status group 0 (all jobs): 00:11:11.891 READ: bw=25.3MiB/s (26.5MB/s), 4891KiB/s-9.99MiB/s (5008kB/s-10.5MB/s), io=25.3MiB (26.5MB), run=1001-1001msec 00:11:11.891 WRITE: bw=29.7MiB/s (31.2MB/s), 6138KiB/s-11.7MiB/s (6285kB/s-12.3MB/s), io=29.7MiB (31.2MB), run=1001-1001msec 00:11:11.891 00:11:11.891 Disk stats (read/write): 00:11:11.891 nvme0n1: ios=1074/1377, merge=0/0, ticks=449/400, in_queue=849, util=89.38% 00:11:11.891 nvme0n2: ios=2236/2560, merge=0/0, ticks=445/394, in_queue=839, util=90.31% 00:11:11.891 nvme0n3: ios=1149/1536, merge=0/0, ticks=453/382, in_queue=835, util=89.98% 00:11:11.891 nvme0n4: ios=1024/1429, merge=0/0, ticks=413/419, in_queue=832, util=89.43% 00:11:11.891 17:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:11.891 [global] 00:11:11.891 thread=1 00:11:11.891 invalidate=1 00:11:11.891 rw=write 00:11:11.891 time_based=1 00:11:11.891 runtime=1 00:11:11.891 ioengine=libaio 00:11:11.891 direct=1 00:11:11.891 bs=4096 00:11:11.891 iodepth=128 00:11:11.891 norandommap=0 00:11:11.891 numjobs=1 00:11:11.891 00:11:11.891 verify_dump=1 00:11:11.891 verify_backlog=512 00:11:11.891 verify_state_save=0 00:11:11.891 do_verify=1 00:11:11.891 verify=crc32c-intel 00:11:11.891 [job0] 00:11:11.891 filename=/dev/nvme0n1 00:11:11.891 [job1] 00:11:11.891 filename=/dev/nvme0n2 00:11:11.891 [job2] 00:11:11.891 filename=/dev/nvme0n3 00:11:11.891 [job3] 00:11:11.891 filename=/dev/nvme0n4 00:11:11.891 Could not set queue depth (nvme0n1) 00:11:11.891 Could not set queue depth (nvme0n2) 00:11:11.891 Could not set queue depth (nvme0n3) 00:11:11.891 Could not set queue depth (nvme0n4) 00:11:11.891 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.891 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.891 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.891 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.891 fio-3.35 00:11:11.891 Starting 4 threads 00:11:13.269 00:11:13.269 job0: (groupid=0, jobs=1): err= 0: pid=83132: Wed Nov 20 17:31:54 2024 00:11:13.269 read: IOPS=1915, BW=7661KiB/s (7845kB/s)(7684KiB/1003msec) 00:11:13.269 slat 
(usec): min=3, max=7928, avg=251.57, stdev=940.91 00:11:13.269 clat (usec): min=474, max=43727, avg=30530.42, stdev=7174.64 00:11:13.269 lat (usec): min=4231, max=43744, avg=30781.99, stdev=7183.17 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[ 4621], 5.00th=[15664], 10.00th=[20841], 20.00th=[24773], 00:11:13.269 | 30.00th=[30278], 40.00th=[31851], 50.00th=[32375], 60.00th=[33162], 00:11:13.269 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[38011], 00:11:13.269 | 99.00th=[40633], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:11:13.269 | 99.99th=[43779] 00:11:13.269 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:11:13.269 slat (usec): min=9, max=9666, avg=243.44, stdev=845.00 00:11:13.269 clat (usec): min=18473, max=44741, avg=32686.77, stdev=6627.20 00:11:13.269 lat (usec): min=19030, max=44798, avg=32930.22, stdev=6619.71 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[19006], 5.00th=[19792], 10.00th=[20317], 20.00th=[26608], 00:11:13.269 | 30.00th=[31851], 40.00th=[33817], 50.00th=[34866], 60.00th=[35914], 00:11:13.269 | 70.00th=[36963], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:11:13.269 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:11:13.269 | 99.99th=[44827] 00:11:13.269 bw ( KiB/s): min= 8192, max= 8192, per=17.75%, avg=8192.00, stdev= 0.00, samples=2 00:11:13.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:13.269 lat (usec) : 500=0.03% 00:11:13.269 lat (msec) : 10=1.59%, 20=4.61%, 50=93.78% 00:11:13.269 cpu : usr=2.79%, sys=6.59%, ctx=688, majf=0, minf=13 00:11:13.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:13.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.269 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.269 job1: (groupid=0, jobs=1): err= 0: pid=83133: Wed Nov 20 17:31:54 2024 00:11:13.269 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:11:13.269 slat (usec): min=5, max=7919, avg=132.68, stdev=622.23 00:11:13.269 clat (usec): min=9657, max=25433, avg=16452.37, stdev=2461.40 00:11:13.269 lat (usec): min=9683, max=25470, avg=16585.05, stdev=2511.87 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[10421], 5.00th=[12125], 10.00th=[13173], 20.00th=[15139], 00:11:13.269 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16188], 60.00th=[16450], 00:11:13.269 | 70.00th=[16909], 80.00th=[17695], 90.00th=[20055], 95.00th=[21103], 00:11:13.269 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25297], 99.95th=[25297], 00:11:13.269 | 99.99th=[25560] 00:11:13.269 write: IOPS=3983, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1006msec); 0 zone resets 00:11:13.269 slat (usec): min=11, max=6509, avg=121.88, stdev=379.42 00:11:13.269 clat (usec): min=4697, max=25265, avg=16978.54, stdev=2429.66 00:11:13.269 lat (usec): min=5409, max=25378, avg=17100.42, stdev=2441.75 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[ 9765], 5.00th=[12256], 10.00th=[13960], 20.00th=[15795], 00:11:13.269 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:11:13.269 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[20579], 00:11:13.269 | 99.00th=[23462], 99.50th=[24511], 99.90th=[25035], 99.95th=[25035], 00:11:13.269 | 99.99th=[25297] 00:11:13.269 bw ( KiB/s): 
min=14656, max=16416, per=33.67%, avg=15536.00, stdev=1244.51, samples=2 00:11:13.269 iops : min= 3664, max= 4104, avg=3884.00, stdev=311.13, samples=2 00:11:13.269 lat (msec) : 10=0.92%, 20=91.13%, 50=7.94% 00:11:13.269 cpu : usr=3.48%, sys=13.23%, ctx=635, majf=0, minf=14 00:11:13.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:13.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.269 issued rwts: total=3584,4007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.269 job2: (groupid=0, jobs=1): err= 0: pid=83134: Wed Nov 20 17:31:54 2024 00:11:13.269 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:13.269 slat (usec): min=4, max=7083, avg=148.13, stdev=833.13 00:11:13.269 clat (usec): min=12689, max=29485, avg=19593.98, stdev=1950.59 00:11:13.269 lat (usec): min=12715, max=29561, avg=19742.11, stdev=2099.68 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[16909], 5.00th=[16909], 10.00th=[17171], 20.00th=[17695], 00:11:13.269 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:11:13.269 | 70.00th=[20317], 80.00th=[21103], 90.00th=[22414], 95.00th=[22938], 00:11:13.269 | 99.00th=[25035], 99.50th=[25822], 99.90th=[28967], 99.95th=[29230], 00:11:13.269 | 99.99th=[29492] 00:11:13.269 write: IOPS=3497, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:11:13.269 slat (usec): min=14, max=7598, avg=147.39, stdev=786.29 00:11:13.269 clat (usec): min=654, max=29356, avg=18815.98, stdev=2703.14 00:11:13.269 lat (usec): min=6756, max=29444, avg=18963.37, stdev=2713.17 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[ 7898], 5.00th=[14091], 10.00th=[15664], 20.00th=[17171], 00:11:13.269 | 30.00th=[17957], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:11:13.269 | 70.00th=[19792], 80.00th=[20841], 90.00th=[21890], 95.00th=[23200], 00:11:13.269 | 99.00th=[23987], 99.50th=[23987], 99.90th=[28181], 99.95th=[28967], 00:11:13.269 | 99.99th=[29230] 00:11:13.269 bw ( KiB/s): min=13122, max=13896, per=29.28%, avg=13509.00, stdev=547.30, samples=2 00:11:13.269 iops : min= 3280, max= 3474, avg=3377.00, stdev=137.18, samples=2 00:11:13.269 lat (usec) : 750=0.02% 00:11:13.269 lat (msec) : 10=0.64%, 20=68.20%, 50=31.14% 00:11:13.269 cpu : usr=3.20%, sys=10.79%, ctx=196, majf=0, minf=11 00:11:13.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:13.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.269 issued rwts: total=3072,3501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.269 job3: (groupid=0, jobs=1): err= 0: pid=83135: Wed Nov 20 17:31:54 2024 00:11:13.269 read: IOPS=1955, BW=7820KiB/s (8008kB/s)(7836KiB/1002msec) 00:11:13.269 slat (usec): min=4, max=7521, avg=248.57, stdev=937.81 00:11:13.269 clat (usec): min=652, max=43140, avg=30434.04, stdev=7220.55 00:11:13.269 lat (usec): min=4989, max=43858, avg=30682.60, stdev=7232.05 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[ 5342], 5.00th=[16188], 10.00th=[20841], 20.00th=[21365], 00:11:13.269 | 30.00th=[30540], 40.00th=[31589], 50.00th=[32113], 60.00th=[33424], 00:11:13.269 | 70.00th=[34866], 80.00th=[35914], 90.00th=[36963], 
95.00th=[38536], 00:11:13.269 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:13.269 | 99.99th=[43254] 00:11:13.269 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:11:13.269 slat (usec): min=11, max=8926, avg=241.30, stdev=855.25 00:11:13.269 clat (usec): min=13107, max=44985, avg=32171.42, stdev=7058.01 00:11:13.269 lat (usec): min=13174, max=46065, avg=32412.72, stdev=7068.21 00:11:13.269 clat percentiles (usec): 00:11:13.269 | 1.00th=[16909], 5.00th=[17957], 10.00th=[21365], 20.00th=[23462], 00:11:13.269 | 30.00th=[31327], 40.00th=[32637], 50.00th=[34866], 60.00th=[35914], 00:11:13.269 | 70.00th=[36963], 80.00th=[38011], 90.00th=[38536], 95.00th=[39584], 00:11:13.269 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:11:13.269 | 99.99th=[44827] 00:11:13.269 bw ( KiB/s): min= 8192, max= 8192, per=17.75%, avg=8192.00, stdev= 0.00, samples=2 00:11:13.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:13.269 lat (usec) : 750=0.02% 00:11:13.269 lat (msec) : 10=1.02%, 20=6.56%, 50=92.39% 00:11:13.269 cpu : usr=2.50%, sys=6.89%, ctx=652, majf=0, minf=9 00:11:13.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:13.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.269 issued rwts: total=1959,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.269 00:11:13.269 Run status group 0 (all jobs): 00:11:13.269 READ: bw=40.9MiB/s (42.9MB/s), 7661KiB/s-13.9MiB/s (7845kB/s-14.6MB/s), io=41.2MiB (43.2MB), run=1001-1006msec 00:11:13.269 WRITE: bw=45.1MiB/s (47.2MB/s), 8167KiB/s-15.6MiB/s (8364kB/s-16.3MB/s), io=45.3MiB (47.5MB), run=1001-1006msec 00:11:13.269 00:11:13.269 Disk stats (read/write): 00:11:13.269 nvme0n1: ios=1586/1595, merge=0/0, ticks=12803/13338, in_queue=26141, util=88.57% 00:11:13.269 nvme0n2: ios=3120/3487, merge=0/0, ticks=24349/26886, in_queue=51235, util=88.77% 00:11:13.269 nvme0n3: ios=2695/3072, merge=0/0, ticks=15836/16684, in_queue=32520, util=89.35% 00:11:13.269 nvme0n4: ios=1536/1604, merge=0/0, ticks=12690/13107, in_queue=25797, util=89.71% 00:11:13.269 17:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:13.269 [global] 00:11:13.269 thread=1 00:11:13.269 invalidate=1 00:11:13.269 rw=randwrite 00:11:13.269 time_based=1 00:11:13.269 runtime=1 00:11:13.269 ioengine=libaio 00:11:13.269 direct=1 00:11:13.269 bs=4096 00:11:13.269 iodepth=128 00:11:13.269 norandommap=0 00:11:13.269 numjobs=1 00:11:13.269 00:11:13.269 verify_dump=1 00:11:13.269 verify_backlog=512 00:11:13.270 verify_state_save=0 00:11:13.270 do_verify=1 00:11:13.270 verify=crc32c-intel 00:11:13.270 [job0] 00:11:13.270 filename=/dev/nvme0n1 00:11:13.270 [job1] 00:11:13.270 filename=/dev/nvme0n2 00:11:13.270 [job2] 00:11:13.270 filename=/dev/nvme0n3 00:11:13.270 [job3] 00:11:13.270 filename=/dev/nvme0n4 00:11:13.270 Could not set queue depth (nvme0n1) 00:11:13.270 Could not set queue depth (nvme0n2) 00:11:13.270 Could not set queue depth (nvme0n3) 00:11:13.270 Could not set queue depth (nvme0n4) 00:11:13.270 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.270 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.270 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.270 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.270 fio-3.35 00:11:13.270 Starting 4 threads 00:11:14.647 00:11:14.648 job0: (groupid=0, jobs=1): err= 0: pid=83193: Wed Nov 20 17:31:56 2024 00:11:14.648 read: IOPS=1994, BW=7976KiB/s (8168kB/s)(8040KiB/1008msec) 00:11:14.648 slat (usec): min=6, max=15709, avg=266.53, stdev=1440.02 00:11:14.648 clat (usec): min=731, max=61391, avg=33752.41, stdev=10031.99 00:11:14.648 lat (usec): min=7303, max=61405, avg=34018.94, stdev=10007.08 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[ 7701], 5.00th=[23462], 10.00th=[26084], 20.00th=[27132], 00:11:14.648 | 30.00th=[27657], 40.00th=[28443], 50.00th=[31065], 60.00th=[33424], 00:11:14.648 | 70.00th=[35914], 80.00th=[41681], 90.00th=[46400], 95.00th=[53740], 00:11:14.648 | 99.00th=[61080], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:11:14.648 | 99.99th=[61604] 00:11:14.648 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:14.648 slat (usec): min=14, max=10016, avg=221.26, stdev=1157.29 00:11:14.648 clat (usec): min=15535, max=43436, avg=28565.23, stdev=6597.59 00:11:14.648 lat (usec): min=20035, max=43462, avg=28786.49, stdev=6558.21 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[18744], 5.00th=[20579], 10.00th=[21627], 20.00th=[22676], 00:11:14.648 | 30.00th=[23462], 40.00th=[24511], 50.00th=[26870], 60.00th=[28967], 00:11:14.648 | 70.00th=[33424], 80.00th=[34866], 90.00th=[37487], 95.00th=[41157], 00:11:14.648 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:14.648 | 99.99th=[43254] 00:11:14.648 bw ( KiB/s): min= 8192, max= 8192, per=17.42%, avg=8192.00, stdev= 0.00, samples=2 00:11:14.648 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:14.648 lat (usec) : 750=0.02% 00:11:14.648 lat (msec) : 10=0.79%, 20=1.50%, 50=93.72%, 100=3.97% 00:11:14.648 cpu : usr=1.39%, sys=6.45%, ctx=145, majf=0, minf=11 00:11:14.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:14.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.648 issued rwts: total=2010,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.648 job1: (groupid=0, jobs=1): err= 0: pid=83194: Wed Nov 20 17:31:56 2024 00:11:14.648 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:11:14.648 slat (usec): min=5, max=5413, avg=119.17, stdev=567.33 00:11:14.648 clat (usec): min=10967, max=24205, avg=15670.50, stdev=1787.78 00:11:14.648 lat (usec): min=11455, max=24228, avg=15789.68, stdev=1825.45 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[11994], 5.00th=[12649], 10.00th=[13304], 20.00th=[14353], 00:11:14.648 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15664], 60.00th=[15926], 00:11:14.648 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[18744], 00:11:14.648 | 99.00th=[20055], 99.50th=[20317], 99.90th=[23200], 99.95th=[23462], 00:11:14.648 | 99.99th=[24249] 00:11:14.648 write: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1005msec); 0 zone resets 00:11:14.648 slat (usec): min=9, max=5582, avg=113.94, stdev=504.74 00:11:14.648 clat 
(usec): min=4088, max=21797, avg=15028.26, stdev=1983.41 00:11:14.648 lat (usec): min=4804, max=21847, avg=15142.20, stdev=1992.97 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[ 6849], 5.00th=[11469], 10.00th=[12780], 20.00th=[13960], 00:11:14.648 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15664], 00:11:14.648 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16909], 95.00th=[18220], 00:11:14.648 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:11:14.648 | 99.99th=[21890] 00:11:14.648 bw ( KiB/s): min=16384, max=16384, per=34.84%, avg=16384.00, stdev= 0.00, samples=2 00:11:14.648 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:14.648 lat (msec) : 10=0.77%, 20=98.31%, 50=0.92% 00:11:14.648 cpu : usr=3.09%, sys=15.44%, ctx=456, majf=0, minf=9 00:11:14.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:14.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.648 issued rwts: total=4096,4170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.648 job2: (groupid=0, jobs=1): err= 0: pid=83195: Wed Nov 20 17:31:56 2024 00:11:14.648 read: IOPS=1651, BW=6606KiB/s (6764kB/s)(6652KiB/1007msec) 00:11:14.648 slat (usec): min=7, max=13631, avg=267.20, stdev=1417.45 00:11:14.648 clat (usec): min=1294, max=52978, avg=32919.62, stdev=7094.04 00:11:14.648 lat (usec): min=7387, max=55856, avg=33186.83, stdev=7185.16 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[ 7635], 5.00th=[19530], 10.00th=[26084], 20.00th=[28967], 00:11:14.648 | 30.00th=[30016], 40.00th=[31327], 50.00th=[32637], 60.00th=[34341], 00:11:14.648 | 70.00th=[35914], 80.00th=[39060], 90.00th=[40633], 95.00th=[43779], 00:11:14.648 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[53216], 00:11:14.648 | 99.99th=[53216] 00:11:14.648 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:11:14.648 slat (usec): min=14, max=9932, avg=265.07, stdev=1093.44 00:11:14.648 clat (usec): min=19462, max=55748, avg=34978.74, stdev=9034.90 00:11:14.648 lat (usec): min=19482, max=55774, avg=35243.81, stdev=9115.37 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[20579], 5.00th=[21627], 10.00th=[23462], 20.00th=[25297], 00:11:14.648 | 30.00th=[28443], 40.00th=[32375], 50.00th=[33162], 60.00th=[38011], 00:11:14.648 | 70.00th=[41157], 80.00th=[44303], 90.00th=[47449], 95.00th=[49021], 00:11:14.648 | 99.00th=[52691], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:11:14.648 | 99.99th=[55837] 00:11:14.648 bw ( KiB/s): min= 8184, max= 8208, per=17.43%, avg=8196.00, stdev=16.97, samples=2 00:11:14.648 iops : min= 2046, max= 2052, avg=2049.00, stdev= 4.24, samples=2 00:11:14.648 lat (msec) : 2=0.03%, 10=1.10%, 20=1.29%, 50=94.99%, 100=2.59% 00:11:14.648 cpu : usr=2.68%, sys=5.67%, ctx=212, majf=0, minf=5 00:11:14.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:14.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.648 issued rwts: total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.648 job3: (groupid=0, jobs=1): err= 0: pid=83196: Wed Nov 20 17:31:56 2024 00:11:14.648 read: IOPS=3407, 
BW=13.3MiB/s (14.0MB/s)(13.4MiB/1005msec) 00:11:14.648 slat (usec): min=6, max=5400, avg=140.98, stdev=690.50 00:11:14.648 clat (usec): min=409, max=25374, avg=18174.27, stdev=2232.47 00:11:14.648 lat (usec): min=4443, max=30257, avg=18315.25, stdev=2149.52 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[ 8848], 5.00th=[14746], 10.00th=[16450], 20.00th=[17433], 00:11:14.648 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:11:14.648 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20055], 95.00th=[20841], 00:11:14.648 | 99.00th=[22152], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:11:14.648 | 99.99th=[25297] 00:11:14.648 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:14.648 slat (usec): min=14, max=5231, avg=136.17, stdev=635.70 00:11:14.648 clat (usec): min=13473, max=24101, avg=17974.47, stdev=2204.40 00:11:14.648 lat (usec): min=13496, max=24126, avg=18110.64, stdev=2198.69 00:11:14.648 clat percentiles (usec): 00:11:14.648 | 1.00th=[14353], 5.00th=[14615], 10.00th=[15401], 20.00th=[15926], 00:11:14.648 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17957], 60.00th=[18482], 00:11:14.648 | 70.00th=[19268], 80.00th=[20055], 90.00th=[20579], 95.00th=[21627], 00:11:14.648 | 99.00th=[23200], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:11:14.648 | 99.99th=[23987] 00:11:14.648 bw ( KiB/s): min=14144, max=14557, per=30.52%, avg=14350.50, stdev=292.04, samples=2 00:11:14.648 iops : min= 3536, max= 3639, avg=3587.50, stdev=72.83, samples=2 00:11:14.648 lat (usec) : 500=0.01% 00:11:14.648 lat (msec) : 10=0.91%, 20=82.25%, 50=16.82% 00:11:14.648 cpu : usr=3.09%, sys=11.55%, ctx=294, majf=0, minf=12 00:11:14.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:14.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.648 issued rwts: total=3425,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.648 00:11:14.648 Run status group 0 (all jobs): 00:11:14.648 READ: bw=43.4MiB/s (45.5MB/s), 6606KiB/s-15.9MiB/s (6764kB/s-16.7MB/s), io=43.7MiB (45.8MB), run=1005-1008msec 00:11:14.648 WRITE: bw=45.9MiB/s (48.2MB/s), 8127KiB/s-16.2MiB/s (8322kB/s-17.0MB/s), io=46.3MiB (48.5MB), run=1005-1008msec 00:11:14.648 00:11:14.648 Disk stats (read/write): 00:11:14.648 nvme0n1: ios=1675/2048, merge=0/0, ticks=12714/13542, in_queue=26256, util=88.88% 00:11:14.648 nvme0n2: ios=3633/3620, merge=0/0, ticks=17256/15567, in_queue=32823, util=90.90% 00:11:14.648 nvme0n3: ios=1491/1536, merge=0/0, ticks=16268/18681, in_queue=34949, util=89.71% 00:11:14.648 nvme0n4: ios=2975/3072, merge=0/0, ticks=12959/12085, in_queue=25044, util=89.65% 00:11:14.648 17:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:14.648 17:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=83209 00:11:14.648 17:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:14.648 17:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:14.648 [global] 00:11:14.648 thread=1 00:11:14.648 invalidate=1 00:11:14.648 rw=read 00:11:14.648 time_based=1 00:11:14.648 runtime=10 00:11:14.648 ioengine=libaio 00:11:14.648 direct=1 00:11:14.648 bs=4096 00:11:14.648 iodepth=1 
00:11:14.648 norandommap=1 00:11:14.648 numjobs=1 00:11:14.648 00:11:14.648 [job0] 00:11:14.648 filename=/dev/nvme0n1 00:11:14.648 [job1] 00:11:14.649 filename=/dev/nvme0n2 00:11:14.649 [job2] 00:11:14.649 filename=/dev/nvme0n3 00:11:14.649 [job3] 00:11:14.649 filename=/dev/nvme0n4 00:11:14.649 Could not set queue depth (nvme0n1) 00:11:14.649 Could not set queue depth (nvme0n2) 00:11:14.649 Could not set queue depth (nvme0n3) 00:11:14.649 Could not set queue depth (nvme0n4) 00:11:14.649 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.649 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.649 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.649 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.649 fio-3.35 00:11:14.649 Starting 4 threads 00:11:17.933 17:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:17.933 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47128576, buflen=4096 00:11:17.933 fio: pid=83252, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:17.933 17:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:17.933 fio: pid=83251, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:17.933 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49377280, buflen=4096 00:11:17.933 17:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.933 17:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:18.501 fio: pid=83249, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.501 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45776896, buflen=4096 00:11:18.501 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.501 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:18.760 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51118080, buflen=4096 00:11:18.760 fio: pid=83250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:18.760 00:11:18.760 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83249: Wed Nov 20 17:32:00 2024 00:11:18.760 read: IOPS=3079, BW=12.0MiB/s (12.6MB/s)(43.7MiB/3630msec) 00:11:18.761 slat (usec): min=7, max=13576, avg=21.20, stdev=196.86 00:11:18.761 clat (usec): min=123, max=4726, avg=302.01, stdev=100.22 00:11:18.761 lat (usec): min=135, max=13952, avg=323.21, stdev=222.15 00:11:18.761 clat percentiles (usec): 00:11:18.761 | 1.00th=[ 155], 5.00th=[ 233], 10.00th=[ 253], 20.00th=[ 273], 00:11:18.761 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:11:18.761 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 359], 00:11:18.761 | 99.00th=[ 545], 
99.50th=[ 578], 99.90th=[ 1237], 99.95th=[ 2671], 00:11:18.761 | 99.99th=[ 3687] 00:11:18.761 bw ( KiB/s): min=11902, max=12800, per=25.94%, avg=12339.14, stdev=316.73, samples=7 00:11:18.761 iops : min= 2975, max= 3200, avg=3084.71, stdev=79.30, samples=7 00:11:18.761 lat (usec) : 250=8.88%, 500=89.54%, 750=1.37%, 1000=0.06% 00:11:18.761 lat (msec) : 2=0.07%, 4=0.06%, 10=0.01% 00:11:18.761 cpu : usr=1.02%, sys=4.38%, ctx=11183, majf=0, minf=1 00:11:18.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 issued rwts: total=11177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.761 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83250: Wed Nov 20 17:32:00 2024 00:11:18.761 read: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(48.8MiB/3970msec) 00:11:18.761 slat (usec): min=9, max=16720, avg=25.31, stdev=251.45 00:11:18.761 clat (usec): min=78, max=113094, avg=290.66, stdev=1032.93 00:11:18.761 lat (usec): min=131, max=113121, avg=315.98, stdev=1063.10 00:11:18.761 clat percentiles (usec): 00:11:18.761 | 1.00th=[ 133], 5.00th=[ 155], 10.00th=[ 192], 20.00th=[ 247], 00:11:18.761 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:11:18.761 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 338], 00:11:18.761 | 99.00th=[ 408], 99.50th=[ 502], 99.90th=[ 1237], 99.95th=[ 1893], 00:11:18.761 | 99.99th=[22676] 00:11:18.761 bw ( KiB/s): min=12184, max=13017, per=26.47%, avg=12592.14, stdev=342.31, samples=7 00:11:18.761 iops : min= 3046, max= 3254, avg=3148.00, stdev=85.53, samples=7 00:11:18.761 lat (usec) : 100=0.01%, 250=21.00%, 500=78.46%, 750=0.28%, 1000=0.10% 00:11:18.761 lat (msec) : 2=0.09%, 4=0.03%, 50=0.01%, 250=0.01% 00:11:18.761 cpu : usr=1.01%, sys=5.09%, ctx=12502, majf=0, minf=1 00:11:18.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 issued rwts: total=12481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.761 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83251: Wed Nov 20 17:32:00 2024 00:11:18.761 read: IOPS=3684, BW=14.4MiB/s (15.1MB/s)(47.1MiB/3272msec) 00:11:18.761 slat (usec): min=7, max=8684, avg=15.78, stdev=103.34 00:11:18.761 clat (usec): min=140, max=4698, avg=254.03, stdev=91.08 00:11:18.761 lat (usec): min=152, max=8862, avg=269.82, stdev=137.59 00:11:18.761 clat percentiles (usec): 00:11:18.761 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:11:18.761 | 30.00th=[ 206], 40.00th=[ 221], 50.00th=[ 243], 60.00th=[ 265], 00:11:18.761 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 351], 00:11:18.761 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 619], 99.95th=[ 1090], 00:11:18.761 | 99.99th=[ 4621] 00:11:18.761 bw ( KiB/s): min=13032, max=18496, per=32.13%, avg=15284.00, stdev=2137.73, samples=6 00:11:18.761 iops : min= 3258, max= 4624, avg=3821.00, stdev=534.43, samples=6 00:11:18.761 lat (usec) : 250=53.38%, 500=45.46%, 750=1.09% 00:11:18.761 lat (msec) : 2=0.03%, 
4=0.01%, 10=0.02% 00:11:18.761 cpu : usr=1.13%, sys=4.65%, ctx=12070, majf=0, minf=2 00:11:18.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 issued rwts: total=12056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.761 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83252: Wed Nov 20 17:32:00 2024 00:11:18.761 read: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(44.9MiB/2964msec) 00:11:18.761 slat (nsec): min=7574, max=86086, avg=16159.32, stdev=6338.07 00:11:18.761 clat (usec): min=161, max=6991, avg=239.68, stdev=126.37 00:11:18.761 lat (usec): min=176, max=7025, avg=255.84, stdev=126.56 00:11:18.761 clat percentiles (usec): 00:11:18.761 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:11:18.761 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 235], 00:11:18.761 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 326], 00:11:18.761 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 1483], 99.95th=[ 3130], 00:11:18.761 | 99.99th=[ 6128] 00:11:18.761 bw ( KiB/s): min=13032, max=18656, per=33.71%, avg=16036.80, stdev=2614.48, samples=5 00:11:18.761 iops : min= 3258, max= 4664, avg=4009.20, stdev=653.62, samples=5 00:11:18.761 lat (usec) : 250=64.00%, 500=35.78%, 750=0.05%, 1000=0.03% 00:11:18.761 lat (msec) : 2=0.03%, 4=0.06%, 10=0.03% 00:11:18.761 cpu : usr=1.48%, sys=5.10%, ctx=11508, majf=0, minf=2 00:11:18.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.761 issued rwts: total=11507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.761 00:11:18.761 Run status group 0 (all jobs): 00:11:18.761 READ: bw=46.5MiB/s (48.7MB/s), 12.0MiB/s-15.2MiB/s (12.6MB/s-15.9MB/s), io=184MiB (193MB), run=2964-3970msec 00:11:18.761 00:11:18.761 Disk stats (read/write): 00:11:18.761 nvme0n1: ios=11134/0, merge=0/0, ticks=3464/0, in_queue=3464, util=95.51% 00:11:18.761 nvme0n2: ios=12454/0, merge=0/0, ticks=3629/0, in_queue=3629, util=95.49% 00:11:18.761 nvme0n3: ios=11760/0, merge=0/0, ticks=2938/0, in_queue=2938, util=96.37% 00:11:18.761 nvme0n4: ios=11196/0, merge=0/0, ticks=2692/0, in_queue=2692, util=96.49% 00:11:18.761 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.761 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:19.331 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.331 17:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:19.591 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.591 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:19.850 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.850 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:20.109 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.109 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:20.368 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:20.368 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 83209 00:11:20.368 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:20.368 17:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.369 nvmf hotplug test: fio failed as expected 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:20.369 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:20.628 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@124 -- # set +e 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.888 rmmod nvme_tcp 00:11:20.888 rmmod nvme_fabrics 00:11:20.888 rmmod nvme_keyring 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 82722 ']' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 82722 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 82722 ']' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 82722 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82722 00:11:20.888 killing process with pid 82722 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82722' 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 82722 00:11:20.888 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 82722 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.146 17:32:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.146 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.147 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.405 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.405 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:21.405 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.405 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.405 17:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:21.405 00:11:21.405 real 0m20.333s 00:11:21.405 user 1m17.735s 00:11:21.405 sys 0m8.277s 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.405 ************************************ 00:11:21.405 END TEST nvmf_fio_target 00:11:21.405 ************************************ 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.405 ************************************ 00:11:21.405 START TEST nvmf_bdevio 00:11:21.405 ************************************ 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.405 * Looking for test storage... 
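[editor's note] The nvmf_fio_target run that just ended above closes with the hotplug check: the malloc bdevs are deleted while fio is still reading, every job dies with "Operation not supported" (err=95), and the script maps fio's nonzero exit (fio_status=4) to "nvmf hotplug test: fio failed as expected". A minimal standalone sketch of that pattern, assuming the target still exposes /dev/nvme0n1 and a bdev named Malloc0 (both names as logged above; the sleep and runtime values here are illustrative, not taken from the test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Long-running read job against the exported namespace.
    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --ioengine=libaio --direct=1 --iodepth=1 --time_based --runtime=10 &
    fio_pid=$!

    sleep 3                            # let I/O reach steady state first
    $rpc bdev_malloc_delete Malloc0    # remove a backing bdev under load

    # Success for this test is fio *failing*: the namespace vanished mid-run.
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi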
00:11:21.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.405 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.664 --rc genhtml_branch_coverage=1 00:11:21.664 --rc genhtml_function_coverage=1 00:11:21.664 --rc genhtml_legend=1 00:11:21.664 --rc geninfo_all_blocks=1 00:11:21.664 --rc geninfo_unexecuted_blocks=1 00:11:21.664 00:11:21.664 ' 00:11:21.664 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.665 --rc genhtml_branch_coverage=1 00:11:21.665 --rc genhtml_function_coverage=1 00:11:21.665 --rc genhtml_legend=1 00:11:21.665 --rc geninfo_all_blocks=1 00:11:21.665 --rc geninfo_unexecuted_blocks=1 00:11:21.665 00:11:21.665 ' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.665 --rc genhtml_branch_coverage=1 00:11:21.665 --rc genhtml_function_coverage=1 00:11:21.665 --rc genhtml_legend=1 00:11:21.665 --rc geninfo_all_blocks=1 00:11:21.665 --rc geninfo_unexecuted_blocks=1 00:11:21.665 00:11:21.665 ' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.665 --rc genhtml_branch_coverage=1 00:11:21.665 --rc genhtml_function_coverage=1 00:11:21.665 --rc genhtml_legend=1 00:11:21.665 --rc geninfo_all_blocks=1 00:11:21.665 --rc geninfo_unexecuted_blocks=1 00:11:21.665 00:11:21.665 ' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
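[editor's note] bdevio.sh has now reached nvmftestinit, and the trace that follows shows the virtual-Ethernet plumbing it performs: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks of every address. A condensed sketch of that topology, using the interface names and 10.0.0.0/24 addressing visible in the trace below (the second initiator/target pair, the FORWARD rule, and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side; the *_br ends get enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the two host-side ends so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic through, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3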
00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:21.665 Cannot find device "nvmf_init_br" 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:21.665 Cannot find device "nvmf_init_br2" 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:21.665 Cannot find device "nvmf_tgt_br" 00:11:21.665 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.666 Cannot find device "nvmf_tgt_br2" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:21.666 Cannot find device "nvmf_init_br" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:21.666 Cannot find device "nvmf_init_br2" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:21.666 Cannot find device "nvmf_tgt_br" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:21.666 Cannot find device "nvmf_tgt_br2" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:21.666 Cannot find device "nvmf_br" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:21.666 Cannot find device "nvmf_init_if" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:21.666 Cannot find device "nvmf_init_if2" 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.666 
17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.666 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.924 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:21.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:21.925 00:11:21.925 --- 10.0.0.3 ping statistics --- 00:11:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.925 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:21.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:21.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:11:21.925 00:11:21.925 --- 10.0.0.4 ping statistics --- 00:11:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.925 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:21.925 00:11:21.925 --- 10.0.0.1 ping statistics --- 00:11:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.925 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:21.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:11:21.925 00:11:21.925 --- 10.0.0.2 ping statistics --- 00:11:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.925 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=83646 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 83646 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 83646 ']' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.925 17:32:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.184 [2024-11-20 17:32:03.768419] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:22.184 [2024-11-20 17:32:03.768511] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.184 [2024-11-20 17:32:03.914238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.442 [2024-11-20 17:32:03.992811] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.442 [2024-11-20 17:32:03.992869] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.442 [2024-11-20 17:32:03.992883] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.442 [2024-11-20 17:32:03.992894] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.442 [2024-11-20 17:32:03.992903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.442 [2024-11-20 17:32:03.993097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.442 [2024-11-20 17:32:03.993743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:22.442 [2024-11-20 17:32:03.993896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:22.442 [2024-11-20 17:32:03.993986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.010 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.010 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:23.010 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:23.010 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.010 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 [2024-11-20 17:32:04.839855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 Malloc0 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.269 [2024-11-20 17:32:04.896890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:23.269 { 00:11:23.269 "params": { 00:11:23.269 "name": "Nvme$subsystem", 00:11:23.269 "trtype": "$TEST_TRANSPORT", 00:11:23.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:23.269 "adrfam": "ipv4", 00:11:23.269 "trsvcid": "$NVMF_PORT", 00:11:23.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:23.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:23.269 "hdgst": ${hdgst:-false}, 00:11:23.269 "ddgst": ${ddgst:-false} 00:11:23.269 }, 00:11:23.269 "method": "bdev_nvme_attach_controller" 00:11:23.269 } 00:11:23.269 EOF 00:11:23.269 )") 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:23.269 17:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:23.269 "params": { 00:11:23.269 "name": "Nvme1", 00:11:23.269 "trtype": "tcp", 00:11:23.269 "traddr": "10.0.0.3", 00:11:23.269 "adrfam": "ipv4", 00:11:23.269 "trsvcid": "4420", 00:11:23.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.269 "hdgst": false, 00:11:23.269 "ddgst": false 00:11:23.269 }, 00:11:23.269 "method": "bdev_nvme_attach_controller" 00:11:23.269 }' 00:11:23.269 [2024-11-20 17:32:04.966935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:23.269 [2024-11-20 17:32:04.967643] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83700 ] 00:11:23.529 [2024-11-20 17:32:05.114983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.529 [2024-11-20 17:32:05.211514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.529 [2024-11-20 17:32:05.211668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.529 [2024-11-20 17:32:05.211676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.788 I/O targets: 00:11:23.788 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:23.788 00:11:23.788 00:11:23.788 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.788 http://cunit.sourceforge.net/ 00:11:23.788 00:11:23.788 00:11:23.788 Suite: bdevio tests on: Nvme1n1 00:11:23.788 Test: blockdev write read block ...passed 00:11:23.788 Test: blockdev write zeroes read block ...passed 00:11:23.788 Test: blockdev write zeroes read no split ...passed 00:11:23.788 Test: blockdev write zeroes read split ...passed 00:11:23.788 Test: blockdev write zeroes read split partial ...passed 00:11:23.788 Test: blockdev reset ...[2024-11-20 17:32:05.546733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:23.788 [2024-11-20 17:32:05.546904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde9320 (9): Bad file descriptor 00:11:23.788 [2024-11-20 17:32:05.563618] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:23.788 passed 00:11:23.788 Test: blockdev write read 8 blocks ...passed 00:11:23.788 Test: blockdev write read size > 128k ...passed 00:11:23.788 Test: blockdev write read invalid size ...passed 00:11:24.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.046 Test: blockdev write read max offset ...passed 00:11:24.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.046 Test: blockdev writev readv 8 blocks ...passed 00:11:24.046 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.046 Test: blockdev writev readv block ...passed 00:11:24.046 Test: blockdev writev readv size > 128k ...passed 00:11:24.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.046 Test: blockdev comparev and writev ...[2024-11-20 17:32:05.739977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.046 [2024-11-20 17:32:05.740048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:24.046 [2024-11-20 17:32:05.740093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.740106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.740544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.740598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.740608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.740999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.741028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.741046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.741056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.741480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.741511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.741528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.047 [2024-11-20 17:32:05.741539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:24.047 passed 00:11:24.047 Test: blockdev nvme passthru rw ...passed 00:11:24.047 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:32:05.824446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.047 [2024-11-20 17:32:05.824477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.824647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.047 [2024-11-20 17:32:05.824664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.824781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.047 [2024-11-20 17:32:05.824797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:24.047 [2024-11-20 17:32:05.824913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.047 [2024-11-20 17:32:05.824928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:24.047 passed 00:11:24.305 Test: blockdev nvme admin passthru ...passed 00:11:24.305 Test: blockdev copy ...passed 00:11:24.305 00:11:24.305 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.305 suites 1 1 n/a 0 0 00:11:24.305 tests 23 23 23 0 0 00:11:24.305 asserts 152 152 152 0 n/a 00:11:24.305 00:11:24.305 Elapsed time = 0.905 seconds 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.563 rmmod nvme_tcp 00:11:24.563 rmmod nvme_fabrics 00:11:24.563 rmmod nvme_keyring 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
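With the bdevio suite green (23/23 tests, 152 asserts, about 0.9 s), teardown begins: the initiator-side kernel modules come out (the rmmod lines above), then nvmftestfini kills the target by pid and strips the firewall rules. Every rule installed during setup carried an -m comment tag of the form 'SPDK_NVMF:<rule>'; the iptr helper traced below uses that tag to remove exactly the harness's rules and leave the rest of the ruleset alone. In effect:

    # Effect of the iptr cleanup helper seen in the trace: rewrite the
    # ruleset without any rule carrying the SPDK_NVMF comment from setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore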
00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 83646 ']' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 83646 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 83646 ']' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 83646 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83646 00:11:24.563 killing process with pid 83646 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83646' 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 83646 00:11:24.563 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 83646 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:24.822 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.080 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:25.081 00:11:25.081 real 0m3.673s 00:11:25.081 user 0m12.377s 00:11:25.081 sys 0m1.021s 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.081 ************************************ 00:11:25.081 END TEST nvmf_bdevio 00:11:25.081 ************************************ 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:25.081 00:11:25.081 real 3m34.415s 00:11:25.081 user 11m4.737s 00:11:25.081 sys 1m3.127s 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.081 ************************************ 00:11:25.081 END TEST nvmf_target_core 00:11:25.081 ************************************ 00:11:25.081 17:32:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.081 17:32:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.081 17:32:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.081 17:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.081 ************************************ 00:11:25.081 START TEST nvmf_target_extra 00:11:25.081 ************************************ 00:11:25.081 17:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.340 * Looking for test storage... 
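The timing block above closes the suite: every test is framed the same way, a START banner, the script run under time (hence the real/user/sys triplet, 3.7 s of wall clock for nvmf_bdevio against roughly 3 h 34 m for the whole target-core phase), and an END banner. A hypothetical simplification of the run_test wrapper from autotest_common.sh (the real helper also validates its argument count, which is the '[' 3 -le 1 ']' check in the trace):

    # Sketch only: banner, timed execution, banner.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # produces the real/user/sys lines in the log
        echo "************ END TEST $name ************"
    }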
00:11:25.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:25.340 17:32:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.340 --rc genhtml_branch_coverage=1 00:11:25.340 --rc genhtml_function_coverage=1 00:11:25.340 --rc genhtml_legend=1 00:11:25.340 --rc geninfo_all_blocks=1 00:11:25.340 --rc geninfo_unexecuted_blocks=1 00:11:25.340 00:11:25.340 ' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.340 --rc genhtml_branch_coverage=1 00:11:25.340 --rc genhtml_function_coverage=1 00:11:25.340 --rc genhtml_legend=1 00:11:25.340 --rc geninfo_all_blocks=1 00:11:25.340 --rc geninfo_unexecuted_blocks=1 00:11:25.340 00:11:25.340 ' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.340 --rc genhtml_branch_coverage=1 00:11:25.340 --rc genhtml_function_coverage=1 00:11:25.340 --rc genhtml_legend=1 00:11:25.340 --rc geninfo_all_blocks=1 00:11:25.340 --rc geninfo_unexecuted_blocks=1 00:11:25.340 00:11:25.340 ' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.340 --rc genhtml_branch_coverage=1 00:11:25.340 --rc genhtml_function_coverage=1 00:11:25.340 --rc genhtml_legend=1 00:11:25.340 --rc geninfo_all_blocks=1 00:11:25.340 --rc geninfo_unexecuted_blocks=1 00:11:25.340 00:11:25.340 ' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.340 17:32:07 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.340 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.341 ************************************ 00:11:25.341 START TEST nvmf_example 00:11:25.341 ************************************ 00:11:25.341 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.341 * Looking for test storage... 
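Two details worth noting as nvmf_example re-sources common.sh (which is why the lcov probe and PATH exports repeat below). First, the captured "line 33: [: : integer expression expected" is a bash diagnostic, not a failure: line 33 evaluates '[' '' -eq 1 ']' with an empty operand, [ rejects it, and execution falls through to the '[' -n '' ']' branch. Second, each sourcing regenerates the per-run host identity; a sketch of that derivation, where the uuid-extraction step is an assumption about the helper (only the resulting values appear in the trace):

    # Per-run initiator identity, as the trace suggests common.sh builds it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare uuid (assumed extraction)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")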
00:11:25.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.601 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:25.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.602 --rc genhtml_branch_coverage=1 00:11:25.602 --rc genhtml_function_coverage=1 00:11:25.602 --rc genhtml_legend=1 00:11:25.602 --rc geninfo_all_blocks=1 00:11:25.602 --rc geninfo_unexecuted_blocks=1 00:11:25.602 00:11:25.602 ' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:25.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.602 --rc genhtml_branch_coverage=1 00:11:25.602 --rc genhtml_function_coverage=1 00:11:25.602 --rc genhtml_legend=1 00:11:25.602 --rc geninfo_all_blocks=1 00:11:25.602 --rc geninfo_unexecuted_blocks=1 00:11:25.602 00:11:25.602 ' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:25.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.602 --rc genhtml_branch_coverage=1 00:11:25.602 --rc genhtml_function_coverage=1 00:11:25.602 --rc genhtml_legend=1 00:11:25.602 --rc geninfo_all_blocks=1 00:11:25.602 --rc geninfo_unexecuted_blocks=1 00:11:25.602 00:11:25.602 ' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:25.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.602 --rc genhtml_branch_coverage=1 00:11:25.602 --rc genhtml_function_coverage=1 00:11:25.602 --rc genhtml_legend=1 00:11:25.602 --rc geninfo_all_blocks=1 00:11:25.602 --rc geninfo_unexecuted_blocks=1 00:11:25.602 00:11:25.602 ' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:25.602 17:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:25.602 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:25.602 17:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:25.603 Cannot find device "nvmf_init_br" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:25.603 Cannot find device "nvmf_init_br2" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:25.603 Cannot find device "nvmf_tgt_br" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.603 Cannot find device "nvmf_tgt_br2" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:25.603 Cannot find device "nvmf_init_br" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:25.603 Cannot find device "nvmf_init_br2" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:25.603 Cannot find device "nvmf_tgt_br" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:25.603 Cannot find device "nvmf_tgt_br2" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:25.603 Cannot find device "nvmf_br" 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:11:25.603 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:25.863 Cannot find 
device "nvmf_init_if" 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:25.863 Cannot find device "nvmf_init_if2" 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:25.863 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:26.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:26.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms
00:11:26.122 
00:11:26.122 --- 10.0.0.3 ping statistics ---
00:11:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.122 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:26.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:26.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms
00:11:26.122 
00:11:26.122 --- 10.0.0.4 ping statistics ---
00:11:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.122 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
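The @786 expansions show what the ipts wrapper does: it forwards its arguments to iptables and appends an -m comment tag of the form SPDK_NVMF:<args>, which is what later lets teardown strip exactly these rules. A sketch of a wrapper with that behaviour (an assumption modeled on the trace, not copied from nvmf/common.sh):

    # Tag every rule so cleanup can filter it out of iptables-save output.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # bridge-local traffic

The pings that follow confirm connectivity in both directions: the root namespace reaches the target addresses (10.0.0.3, 10.0.0.4), and the namespace reaches the initiator addresses (10.0.0.1, 10.0.0.2).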
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:26.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:26.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms
00:11:26.122 
00:11:26.122 --- 10.0.0.1 ping statistics ---
00:11:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.122 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:26.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:26.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:11:26.122 
00:11:26.122 --- 10.0.0.2 ping statistics ---
00:11:26.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:26.122 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # return 0
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=83986
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 83986
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 83986 ']'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
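The example target is started inside the namespace so that it owns the 10.0.0.3/10.0.0.4 side of the links, and waitforlisten then blocks until the process is listening on the RPC socket /var/tmp/spdk.sock. A simplified stand-in for those two steps (an assumption: the real waitforlisten in autotest_common.sh bounds its retries and verifies the RPC endpoint, not mere socket existence):

    # Launch the NVMe-oF example target on cores 0-3 (-m 0xF) inside the namespace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    nvmfpid=$!

    # Crude wait-for-listen: loop until the UNIX-domain RPC socket appears,
    # bailing out if the target process dies first.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || { echo "target died" >&2; exit 1; }
        sleep 0.1
    done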
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:26.122 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.091 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
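The rpc_cmd calls above provision the target end to end: a TCP transport, a 64 MiB RAM-backed bdev with 512-byte blocks (Malloc0), a subsystem that allows any host (-a) with serial SPDK00000000000001, the namespace attachment, and finally a listener on 10.0.0.3:4420. Outside the test framework, the same sequence can plausibly be issued with scripts/rpc.py against the same RPC socket (flags copied verbatim from the trace; rpc_cmd is the framework's wrapper around that socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                 # prints the bdev name, "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420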
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:11:27.350 17:32:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:39.553 Initializing NVMe Controllers
00:11:39.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:11:39.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:39.553 Initialization complete. Launching workers.
00:11:39.553 ========================================================
00:11:39.553                                                                                                Latency(us)
00:11:39.553 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:11:39.553 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15538.52      60.70    4118.28     574.24   23118.84
00:11:39.553 ========================================================
00:11:39.553 Total                                                                  :   15538.52      60.70    4118.28     574.24   23118.84
00:11:39.553 
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:39.553 rmmod nvme_tcp
00:11:39.553 rmmod nvme_fabrics
00:11:39.553 rmmod nvme_keyring
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 83986 ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 83986
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 83986 ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 83986
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83986
00:11:39.553 killing process with pid 83986
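The perf line pins down the workload: 4 KiB random I/O at queue depth 64 for 10 seconds, with -M 30 meaning roughly a 30/70 read/write mix, against the subsystem just created. The result table is self-consistent: 15538.52 IOPS times 4096 bytes is about 60.7 MiB/s, matching the MiB/s column. The same invocation restated with each flag annotated (array form only so the flags can carry comments):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    args=(
        -q 64       # queue depth per connection
        -o 4096     # I/O size in bytes (4 KiB)
        -w randrw   # random mixed read/write workload
        -M 30       # read percentage of the mix
        -t 10       # run time in seconds
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    "$perf" "${args[@]}"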
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83986'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 83986
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 83986
00:11:39.553 nvmf threads initialize successfully
00:11:39.553 bdev subsystem init successfully
00:11:39.553 created a nvmf target service
00:11:39.553 create targets's poll groups done
00:11:39.553 all subsystems of target started
00:11:39.553 nvmf target is running
00:11:39.553 all subsystems of target stopped
00:11:39.553 destroy targets's poll groups done
00:11:39.553 destroyed the nvmf target service
00:11:39.553 bdev subsystem finish successfully
00:11:39.553 nvmf threads destroy successfully
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
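Teardown mirrors setup: kill the target, unload the kernel modules, then let iptr remove every firewall rule the test added. Because each rule was tagged SPDK_NVMF at insertion time, cleanup is a single filter pass rather than a list of per-rule deletes, and nvmf_veth_fini then unwinds the links in reverse order. The rule-stripping pipeline from the trace, plus the final namespace removal (an assumption: _remove_spdk_ns amounts to deleting the namespace):

    # Restore the ruleset minus every rule carrying the SPDK_NVMF tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Drop the target namespace; its remaining veth ends vanish with it.
    ip netns del nvmf_tgt_ns_spdk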
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:39.553 ************************************
00:11:39.553 END TEST nvmf_example
00:11:39.553 ************************************
00:11:39.553 
00:11:39.553 real	0m12.781s
00:11:39.553 user	0m44.586s
00:11:39.553 sys	0m2.306s
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:39.553 ************************************
00:11:39.553 START TEST nvmf_filesystem
00:11:39.553 ************************************
00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:39.553 * Looking for test storage...
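nvmf_example passes (12.8 s wall time) and the harness moves on to nvmf_filesystem. The trace that follows first detects the installed lcov and compares its version against 1.15 using scripts/common.sh's cmp_versions, which splits both dotted version strings on IFS=.-: and walks the fields numerically. A hedged bash sketch of that comparison (simplified; the real helper also implements the other comparison operators and is only modeled here, not copied):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 < 2"   # succeeds, as in the trace below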
00:11:39.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:39.553 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:39.553 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:39.553 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.553 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.554 --rc genhtml_branch_coverage=1 00:11:39.554 --rc genhtml_function_coverage=1 00:11:39.554 --rc genhtml_legend=1 00:11:39.554 --rc geninfo_all_blocks=1 00:11:39.554 --rc geninfo_unexecuted_blocks=1 00:11:39.554 00:11:39.554 ' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.554 --rc genhtml_branch_coverage=1 00:11:39.554 --rc genhtml_function_coverage=1 00:11:39.554 --rc genhtml_legend=1 00:11:39.554 --rc geninfo_all_blocks=1 00:11:39.554 --rc geninfo_unexecuted_blocks=1 00:11:39.554 00:11:39.554 ' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.554 --rc genhtml_branch_coverage=1 00:11:39.554 --rc genhtml_function_coverage=1 00:11:39.554 --rc genhtml_legend=1 00:11:39.554 --rc geninfo_all_blocks=1 00:11:39.554 --rc geninfo_unexecuted_blocks=1 00:11:39.554 00:11:39.554 ' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.554 --rc genhtml_branch_coverage=1 00:11:39.554 --rc genhtml_function_coverage=1 00:11:39.554 --rc genhtml_legend=1 00:11:39.554 --rc geninfo_all_blocks=1 00:11:39.554 --rc geninfo_unexecuted_blocks=1 00:11:39.554 00:11:39.554 ' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:39.554 17:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 
-- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:39.554 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:39.555 17:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:39.555 #define SPDK_CONFIG_H 00:11:39.555 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:39.555 #define SPDK_CONFIG_APPS 1 00:11:39.555 #define SPDK_CONFIG_ARCH native 00:11:39.555 #undef 
SPDK_CONFIG_ASAN 00:11:39.555 #define SPDK_CONFIG_AVAHI 1 00:11:39.555 #undef SPDK_CONFIG_CET 00:11:39.555 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:39.555 #define SPDK_CONFIG_COVERAGE 1 00:11:39.555 #define SPDK_CONFIG_CROSS_PREFIX 00:11:39.555 #undef SPDK_CONFIG_CRYPTO 00:11:39.555 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:39.555 #undef SPDK_CONFIG_CUSTOMOCF 00:11:39.555 #undef SPDK_CONFIG_DAOS 00:11:39.555 #define SPDK_CONFIG_DAOS_DIR 00:11:39.555 #define SPDK_CONFIG_DEBUG 1 00:11:39.555 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:39.555 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:11:39.555 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:11:39.555 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:11:39.555 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:39.555 #undef SPDK_CONFIG_DPDK_UADK 00:11:39.555 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:39.555 #define SPDK_CONFIG_EXAMPLES 1 00:11:39.555 #undef SPDK_CONFIG_FC 00:11:39.555 #define SPDK_CONFIG_FC_PATH 00:11:39.555 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:39.555 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:39.555 #define SPDK_CONFIG_FSDEV 1 00:11:39.555 #undef SPDK_CONFIG_FUSE 00:11:39.555 #undef SPDK_CONFIG_FUZZER 00:11:39.555 #define SPDK_CONFIG_FUZZER_LIB 00:11:39.555 #define SPDK_CONFIG_GOLANG 1 00:11:39.555 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:39.555 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:39.555 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:39.555 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:39.555 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:39.555 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:39.555 #undef SPDK_CONFIG_HAVE_LZ4 00:11:39.555 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:39.555 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:39.555 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:39.555 #define SPDK_CONFIG_IDXD 1 00:11:39.555 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:39.555 #undef SPDK_CONFIG_IPSEC_MB 00:11:39.555 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:39.555 #define SPDK_CONFIG_ISAL 1 00:11:39.555 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:39.555 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:39.555 #define SPDK_CONFIG_LIBDIR 00:11:39.555 #undef SPDK_CONFIG_LTO 00:11:39.555 #define SPDK_CONFIG_MAX_LCORES 128 00:11:39.555 #define SPDK_CONFIG_NVME_CUSE 1 00:11:39.555 #undef SPDK_CONFIG_OCF 00:11:39.555 #define SPDK_CONFIG_OCF_PATH 00:11:39.555 #define SPDK_CONFIG_OPENSSL_PATH 00:11:39.555 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:39.555 #define SPDK_CONFIG_PGO_DIR 00:11:39.555 #undef SPDK_CONFIG_PGO_USE 00:11:39.555 #define SPDK_CONFIG_PREFIX /usr/local 00:11:39.555 #undef SPDK_CONFIG_RAID5F 00:11:39.555 #undef SPDK_CONFIG_RBD 00:11:39.555 #define SPDK_CONFIG_RDMA 1 00:11:39.555 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:39.555 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:39.555 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:39.555 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:39.555 #define SPDK_CONFIG_SHARED 1 00:11:39.555 #undef SPDK_CONFIG_SMA 00:11:39.555 #define SPDK_CONFIG_TESTS 1 00:11:39.555 #undef SPDK_CONFIG_TSAN 00:11:39.555 #define SPDK_CONFIG_UBLK 1 00:11:39.555 #define SPDK_CONFIG_UBSAN 1 00:11:39.555 #undef SPDK_CONFIG_UNIT_TESTS 00:11:39.555 #undef SPDK_CONFIG_URING 00:11:39.555 #define SPDK_CONFIG_URING_PATH 00:11:39.555 #undef SPDK_CONFIG_URING_ZNS 00:11:39.555 #define SPDK_CONFIG_USDT 1 00:11:39.555 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:39.555 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:39.555 
#define SPDK_CONFIG_VFIO_USER 1 00:11:39.555 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:39.555 #define SPDK_CONFIG_VHOST 1 00:11:39.555 #define SPDK_CONFIG_VIRTIO 1 00:11:39.555 #undef SPDK_CONFIG_VTUNE 00:11:39.555 #define SPDK_CONFIG_VTUNE_DIR 00:11:39.555 #define SPDK_CONFIG_WERROR 1 00:11:39.555 #define SPDK_CONFIG_WPDK_DIR 00:11:39.555 #undef SPDK_CONFIG_XNVME 00:11:39.555 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.555 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:39.556 
17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:39.556 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:39.557 
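[Editor's note on the flag exports traced above: the paired ": 0" / "export SPDK_TEST_*" entries are produced by bash's default-assignment expansion inside the `:` no-op builtin. A minimal sketch of that idiom (the flag name here is illustrative, not one of the real SPDK flags):

    # Assign a default only when the variable is unset or empty, then export
    # it so every child test script inherits the same decision. Under `set -x`
    # this traces exactly as ": 0" followed by "export SPDK_TEST_EXAMPLE".
    : "${SPDK_TEST_EXAMPLE:=0}"
    export SPDK_TEST_EXAMPLE

Callers can pre-seed any flag in the environment before the harness is sourced, and the default never overrides it.]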
17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:39.557 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:39.558 17:32:20 
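[Editor's note on the repeated segments in PATH, LD_LIBRARY_PATH, and PYTHONPATH above: each nested `source` of the export scripts prepends the same directories unconditionally, so the values grow with every layer of the test harness. A hedged sketch of a guard that would keep such a variable idempotent; this helper is hypothetical, not part of the SPDK scripts shown here:

    # prepend_path: add a directory to PATH at most once.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present: leave PATH untouched
            *) PATH="$1:$PATH" ;;   # otherwise prepend exactly once
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    export PATH

The duplication is harmless for lookup (the first hit wins) but makes traces like the ones above much harder to read.]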
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 84265 ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 84265 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.u92WA6 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.u92WA6/tests/target /tmp/spdk.u92WA6 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 
-- # fss["$mount"]=btrfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13249024000 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6336229376 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256394240 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13249024000 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6336229376 00:11:39.558 
17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266277888 00:11:39.558 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266429440 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=151552 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253269504 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253281792 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 
-- # fss["$mount"]=fuse.sshfs 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97599352832 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=2103427072 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:39.559 * Looking for test storage... 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13249024000 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:39.559 17:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.559 --rc genhtml_branch_coverage=1 00:11:39.559 --rc genhtml_function_coverage=1 00:11:39.559 --rc genhtml_legend=1 00:11:39.559 --rc geninfo_all_blocks=1 00:11:39.559 --rc geninfo_unexecuted_blocks=1 00:11:39.559 00:11:39.559 ' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.559 --rc genhtml_branch_coverage=1 00:11:39.559 --rc genhtml_function_coverage=1 00:11:39.559 --rc genhtml_legend=1 00:11:39.559 --rc geninfo_all_blocks=1 00:11:39.559 --rc geninfo_unexecuted_blocks=1 00:11:39.559 00:11:39.559 ' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.559 --rc genhtml_branch_coverage=1 00:11:39.559 --rc genhtml_function_coverage=1 00:11:39.559 --rc genhtml_legend=1 00:11:39.559 --rc geninfo_all_blocks=1 00:11:39.559 --rc geninfo_unexecuted_blocks=1 00:11:39.559 00:11:39.559 ' 00:11:39.559 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.560 --rc genhtml_branch_coverage=1 00:11:39.560 --rc genhtml_function_coverage=1 00:11:39.560 --rc genhtml_legend=1 00:11:39.560 --rc geninfo_all_blocks=1 00:11:39.560 --rc geninfo_unexecuted_blocks=1 00:11:39.560 00:11:39.560 ' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
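[Editor's note on the cmp_versions trace above: it splits "1.15" and "2" on the characters ".-:", then compares component by component, padding the shorter version with zeros. A minimal standalone sketch of that comparison, returning success when the first version is strictly lower:

    version_lt() {    # usage: version_lt 1.15 2  -> exit 0 iff $1 < $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing components with 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1                                # equal is not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates the 2.x option set"

In the trace this decides which --rc lcov options the harness may pass to the installed lcov.]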
# uname -s 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.560 17:32:20 
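[Editor's note on the error captured above ("[: : integer expression expected" from nvmf/common.sh line 33): the trace shows `'[' '' -eq 1 ']'`, i.e. an unset variable expanded to an empty string, which the numeric `-eq` test rejects. A sketch of the defensive form; the variable name here is illustrative:

    # Fails noisily when the variable is empty or unset:
    #   [ "$SPDK_TEST_FOO" -eq 1 ] && enable_foo
    # Defaulting the expansion keeps the test well-formed either way:
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi

The run continues because the failing test simply evaluates false, but the stderr line lands in the log as seen above.]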
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:39.560 Cannot find device "nvmf_init_br" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:39.560 Cannot find device "nvmf_init_br2" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:39.560 Cannot find device "nvmf_tgt_br" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.560 Cannot find device "nvmf_tgt_br2" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:39.560 Cannot find device "nvmf_init_br" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:39.560 Cannot find device "nvmf_init_br2" 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:11:39.560 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:39.560 Cannot find device "nvmf_tgt_br" 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:39.561 Cannot find device "nvmf_tgt_br2" 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:39.561 Cannot find device "nvmf_br" 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:39.561 Cannot find device "nvmf_init_if" 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:39.561 Cannot find device "nvmf_init_if2" 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:39.561 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:39.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:39.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:39.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:39.561 00:11:39.561 --- 10.0.0.3 ping statistics --- 00:11:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.561 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:39.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:39.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:11:39.561 00:11:39.561 --- 10.0.0.4 ping statistics --- 00:11:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.561 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:39.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:39.561 00:11:39.561 --- 10.0.0.1 ping statistics --- 00:11:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.561 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:39.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:39.561 00:11:39.561 --- 10.0.0.2 ping statistics --- 00:11:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.561 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # return 0 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 ************************************ 00:11:39.561 START TEST nvmf_filesystem_no_in_capsule 00:11:39.561 ************************************ 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=84453 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 84453 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 84453 ']' 00:11:39.561 17:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.561 17:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 [2024-11-20 17:32:20.814816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:39.561 [2024-11-20 17:32:20.814912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.561 [2024-11-20 17:32:20.959722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.562 [2024-11-20 17:32:21.049056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.562 [2024-11-20 17:32:21.049155] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.562 [2024-11-20 17:32:21.049170] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.562 [2024-11-20 17:32:21.049181] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.562 [2024-11-20 17:32:21.049190] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
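The cleanup failures earlier in the trace ("Cannot find device ...", each tolerated via the following `# true`) are expected on a fresh runner: nvmf/common.sh first tears down any leftover test interfaces, ignoring errors, then rebuilds the topology. That topology is two veth pairs bridged together, with the target-side ends moved into a private network namespace. A condensed sketch of the same setup, using the interface names from the trace and omitting the second if2/br2 pair:

  ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
  ip link add nvmf_br type bridge                               # bridge joining the host-side ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.3                                            # reachability check before the test

The `ipts` calls in the trace are the harness's iptables wrapper; as the expanded commands show, it tags every rule with an 'SPDK_NVMF:...' comment, presumably so teardown can identify and remove exactly the rules the test added.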
00:11:39.562 [2024-11-20 17:32:21.050524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.562 [2024-11-20 17:32:21.050710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.562 [2024-11-20 17:32:21.050869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.562 [2024-11-20 17:32:21.051301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.562 [2024-11-20 17:32:21.272327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.562 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 Malloc1 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 17:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-11-20 17:32:21.498488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:39.820 { 00:11:39.820 "aliases": [ 00:11:39.820 "b46e7ee2-5f6f-4c02-90ab-2f540c4f3511" 00:11:39.820 ], 00:11:39.820 "assigned_rate_limits": { 00:11:39.820 "r_mbytes_per_sec": 0, 00:11:39.820 "rw_ios_per_sec": 0, 00:11:39.820 "rw_mbytes_per_sec": 0, 00:11:39.820 "w_mbytes_per_sec": 0 00:11:39.820 }, 00:11:39.820 "block_size": 512, 00:11:39.820 "claim_type": "exclusive_write", 00:11:39.820 "claimed": true, 00:11:39.820 "driver_specific": {}, 00:11:39.820 "memory_domains": [ 00:11:39.820 { 00:11:39.820 "dma_device_id": "system", 00:11:39.820 "dma_device_type": 1 00:11:39.820 }, 00:11:39.820 { 00:11:39.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.820 
"dma_device_type": 2 00:11:39.820 } 00:11:39.820 ], 00:11:39.820 "name": "Malloc1", 00:11:39.820 "num_blocks": 1048576, 00:11:39.820 "product_name": "Malloc disk", 00:11:39.820 "supported_io_types": { 00:11:39.820 "abort": true, 00:11:39.820 "compare": false, 00:11:39.820 "compare_and_write": false, 00:11:39.820 "copy": true, 00:11:39.820 "flush": true, 00:11:39.820 "get_zone_info": false, 00:11:39.820 "nvme_admin": false, 00:11:39.820 "nvme_io": false, 00:11:39.820 "nvme_io_md": false, 00:11:39.820 "nvme_iov_md": false, 00:11:39.820 "read": true, 00:11:39.820 "reset": true, 00:11:39.820 "seek_data": false, 00:11:39.820 "seek_hole": false, 00:11:39.820 "unmap": true, 00:11:39.820 "write": true, 00:11:39.820 "write_zeroes": true, 00:11:39.820 "zcopy": true, 00:11:39.820 "zone_append": false, 00:11:39.820 "zone_management": false 00:11:39.820 }, 00:11:39.820 "uuid": "b46e7ee2-5f6f-4c02-90ab-2f540c4f3511", 00:11:39.820 "zoned": false 00:11:39.820 } 00:11:39.820 ]' 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:39.820 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:40.079 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:42.030 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:42.288 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:42.289 17:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:43.223 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:43.223 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.223 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:43.223 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.223 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.481 ************************************ 00:11:43.481 START TEST filesystem_ext4 00:11:43.481 ************************************ 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
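The provisioning traced above is a short RPC sequence against the in-namespace target followed by a host-side connect; the 512 B block size times 1048576 blocks reported by bdev_get_bdevs is how the script derives the 536870912-byte (512 MiB) size it then compares against the attached NVMe device. A condensed sketch, assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py and with the --hostnqn/--hostid flags omitted:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4-core target, pid 84453 above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data for this group
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # host sees the namespace as nvme0n1
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%        # one GPT partition for the fs subtests

The serial SPDKISFASTANDAWESOME set on the subsystem is what waitforserial greps for in lsblk output to confirm the controller actually attached before the filesystem subtests start.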
00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.481 mke2fs 1.47.0 (5-Feb-2023) 00:11:43.481 Discarding device blocks: 0/522240 done 00:11:43.481 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.481 Filesystem UUID: 7cba0a0d-430a-4dea-aed3-9f424a0ab4f8 00:11:43.481 Superblock backups stored on blocks: 00:11:43.481 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.481 00:11:43.481 Allocating group tables: 0/64 done 00:11:43.481 Writing inode tables: 0/64 done 00:11:43.481 Creating journal (8192 blocks): done 00:11:43.481 Writing superblocks and filesystem accounting information: 0/64 done 00:11:43.481 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:43.481 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.745 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.004 
17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 84453 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.004 00:11:49.004 real 0m5.653s 00:11:49.004 user 0m0.025s 00:11:49.004 sys 0m0.064s 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.004 ************************************ 00:11:49.004 END TEST filesystem_ext4 00:11:49.004 ************************************ 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.004 ************************************ 00:11:49.004 START TEST filesystem_btrfs 00:11:49.004 ************************************ 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.004 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:49.005 17:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.005 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.263 btrfs-progs v6.8.1 00:11:49.263 See https://btrfs.readthedocs.io for more information. 00:11:49.263 00:11:49.263 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:49.263 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.263 this does not affect your deployments: 00:11:49.263 - DUP for metadata (-m dup) 00:11:49.263 - enabled no-holes (-O no-holes) 00:11:49.263 - enabled free-space-tree (-R free-space-tree) 00:11:49.263 00:11:49.263 Label: (null) 00:11:49.263 UUID: 9e13a97a-661c-425d-a361-f5e52ac7d8c0 00:11:49.263 Node size: 16384 00:11:49.263 Sector size: 4096 (CPU page size: 4096) 00:11:49.263 Filesystem size: 510.00MiB 00:11:49.263 Block group profiles: 00:11:49.263 Data: single 8.00MiB 00:11:49.263 Metadata: DUP 32.00MiB 00:11:49.263 System: DUP 8.00MiB 00:11:49.263 SSD detected: yes 00:11:49.263 Zoned device: no 00:11:49.263 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.263 Checksum: crc32c 00:11:49.263 Number of devices: 1 00:11:49.263 Devices: 00:11:49.263 ID SIZE PATH 00:11:49.263 1 510.00MiB /dev/nvme0n1p1 00:11:49.263 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 84453 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.263 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.263 
17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.263 00:11:49.264 real 0m0.282s 00:11:49.264 user 0m0.022s 00:11:49.264 sys 0m0.068s 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.264 ************************************ 00:11:49.264 END TEST filesystem_btrfs 00:11:49.264 ************************************ 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.264 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.522 ************************************ 00:11:49.522 START TEST filesystem_xfs 00:11:49.522 ************************************ 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.522 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.522 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.522 = sectsz=512 attr=2, projid32bit=1 00:11:49.522 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.522 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.522 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:49.522 = sunit=0 swidth=0 blks 00:11:49.522 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.522 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.522 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.522 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:50.457 Discarding blocks...Done. 00:11:50.457 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:50.457 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 84453 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.988 00:11:52.988 real 0m3.222s 00:11:52.988 user 0m0.027s 00:11:52.988 sys 0m0.059s 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.988 ************************************ 00:11:52.988 END TEST filesystem_xfs 00:11:52.988 ************************************ 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.988 17:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 84453 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 84453 ']' 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 84453 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84453 00:11:52.988 killing process with pid 84453 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84453' 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 84453 00:11:52.988 17:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 84453 00:11:53.247 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.247 00:11:53.247 real 0m14.295s 00:11:53.247 user 0m54.721s 00:11:53.247 sys 0m1.709s 00:11:53.247 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.247 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.247 ************************************ 00:11:53.247 END TEST nvmf_filesystem_no_in_capsule 00:11:53.247 ************************************ 00:11:53.506 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:53.506 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.506 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.506 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 ************************************ 00:11:53.507 START TEST nvmf_filesystem_in_capsule 00:11:53.507 ************************************ 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=84817 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 84817 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 84817 ']' 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
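The second group repeats the same scenario with in_capsule=4096, so nvmf_create_transport is invoked with -c 4096 (up to 4 KiB of command data carried inside the TCP capsule) instead of the -c 0 used above; everything else, including the per-filesystem check, is shared. Each filesystem_* subtest in both groups reduces to the same mkfs/mount/write/verify cycle from target/filesystem.sh, roughly:

  mkfs.ext4 -F /dev/nvme0n1p1               # or mkfs.btrfs -f / mkfs.xfs -f, per subtest
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                     # prove the filesystem is writable over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible on the host

As traced above, each group then tears down by wiping the partition table (flock ... parted -s /dev/nvme0n1 rm 1), disconnecting the initiator (nvme disconnect -n nqn.2016-06.io.spdk:cnode1), deleting the subsystem over RPC, and killing the target process.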
00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.507 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.507 [2024-11-20 17:32:35.144405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:53.507 [2024-11-20 17:32:35.144635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.507 [2024-11-20 17:32:35.280026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.766 [2024-11-20 17:32:35.352749] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.766 [2024-11-20 17:32:35.353123] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.766 [2024-11-20 17:32:35.353273] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.766 [2024-11-20 17:32:35.353321] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.766 [2024-11-20 17:32:35.353416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.766 [2024-11-20 17:32:35.353637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.766 [2024-11-20 17:32:35.353736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.766 [2024-11-20 17:32:35.355102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.766 [2024-11-20 17:32:35.355126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.766 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.766 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:53.766 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:53.766 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.766 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 [2024-11-20 17:32:35.576522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.025 17:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 [2024-11-20 17:32:35.770178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:54.025 17:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:54.025 { 00:11:54.025 "aliases": [ 00:11:54.025 "2b62e6af-7d9d-4754-be12-3e219f0dc69c" 00:11:54.025 ], 00:11:54.025 "assigned_rate_limits": { 00:11:54.025 "r_mbytes_per_sec": 0, 00:11:54.025 "rw_ios_per_sec": 0, 00:11:54.025 "rw_mbytes_per_sec": 0, 00:11:54.025 "w_mbytes_per_sec": 0 00:11:54.025 }, 00:11:54.025 "block_size": 512, 00:11:54.025 "claim_type": "exclusive_write", 00:11:54.025 "claimed": true, 00:11:54.025 "driver_specific": {}, 00:11:54.025 "memory_domains": [ 00:11:54.025 { 00:11:54.025 "dma_device_id": "system", 00:11:54.025 "dma_device_type": 1 00:11:54.025 }, 00:11:54.025 { 00:11:54.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.025 "dma_device_type": 2 00:11:54.025 } 00:11:54.025 ], 00:11:54.025 "name": "Malloc1", 00:11:54.025 "num_blocks": 1048576, 00:11:54.025 "product_name": "Malloc disk", 00:11:54.025 "supported_io_types": { 00:11:54.025 "abort": true, 00:11:54.025 "compare": false, 00:11:54.025 "compare_and_write": false, 00:11:54.025 "copy": true, 00:11:54.025 "flush": true, 00:11:54.025 "get_zone_info": false, 00:11:54.025 "nvme_admin": false, 00:11:54.025 "nvme_io": false, 00:11:54.025 "nvme_io_md": false, 00:11:54.025 "nvme_iov_md": false, 00:11:54.025 "read": true, 00:11:54.025 "reset": true, 00:11:54.025 "seek_data": false, 00:11:54.025 "seek_hole": false, 00:11:54.025 "unmap": true, 00:11:54.025 "write": true, 00:11:54.025 "write_zeroes": true, 00:11:54.025 "zcopy": true, 00:11:54.025 "zone_append": false, 00:11:54.025 "zone_management": false 00:11:54.025 }, 00:11:54.025 "uuid": "2b62e6af-7d9d-4754-be12-3e219f0dc69c", 00:11:54.025 "zoned": false 00:11:54.025 } 00:11:54.025 ]' 00:11:54.025 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:54.284 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:54.542 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.542 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.542 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.542 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.542 17:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.444 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.444 17:32:38 
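
On the host side the test connects with nvme-cli, waits for the namespace to surface by serial, checks the block device size against the 512 MiB malloc bdev (get_bdev_size extracts block_size and num_blocks with jq above), and lays down a single GPT partition. A condensed sketch of those steps; the device name comes from whatever lsblk reports for the serial:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
        --hostid=331739d4-7eee-415c-8218-055dbccf3f5e
    # Resolve the kernel device behind the subsystem's serial number
    dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # /sys/block size is in 512 B sectors; expect 536870912 bytes, same as the bdev
    echo $(( $(cat /sys/block/$dev/size) * 512 ))
    parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%
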
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:56.703 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:57.636 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:57.636 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:57.636 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:57.636 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 ************************************ 00:11:57.637 START TEST filesystem_in_capsule_ext4 00:11:57.637 ************************************ 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:57.637 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:57.637 mke2fs 1.47.0 (5-Feb-2023) 00:11:57.637 Discarding device blocks: 0/522240 done 00:11:57.637 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:57.637 Filesystem UUID: b93ba7a3-70f0-4997-a44a-387032400c6c 00:11:57.637 Superblock backups stored on blocks: 00:11:57.637 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:57.637 00:11:57.637 Allocating group tables: 0/64 done 00:11:57.637 Writing inode tables: 
0/64 done 00:11:57.894 Creating journal (8192 blocks): done 00:11:57.894 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.894 00:11:57.894 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:57.894 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 84817 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.162 00:12:03.162 real 0m5.661s 00:12:03.162 user 0m0.025s 00:12:03.162 sys 0m0.070s 00:12:03.162 ************************************ 00:12:03.162 END TEST filesystem_in_capsule_ext4 00:12:03.162 ************************************ 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.162 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 
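
Each per-filesystem subtest is the same smoke cycle visible in the filesystem.sh traces: mkfs on the partition, mount, create and delete a file with syncs in between, unmount, confirm the target process is still alive, and confirm the device and partition are still visible. A minimal reconstruction (84817 is the nvmf target pid from this run):

    mkfs.ext4 -F /dev/nvme0n1p1          # -F forces mkfs over any existing signature
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 84817                        # exit 0 => target survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1
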
************************************ 00:12:03.420 START TEST filesystem_in_capsule_btrfs 00:12:03.420 ************************************ 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:03.420 17:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.420 btrfs-progs v6.8.1 00:12:03.420 See https://btrfs.readthedocs.io for more information. 00:12:03.420 00:12:03.421 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:03.421 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.421 this does not affect your deployments: 00:12:03.421 - DUP for metadata (-m dup) 00:12:03.421 - enabled no-holes (-O no-holes) 00:12:03.421 - enabled free-space-tree (-R free-space-tree) 00:12:03.421 00:12:03.421 Label: (null) 00:12:03.421 UUID: fb71dcf6-91b2-4d2d-b4a1-ec5006ce9898 00:12:03.421 Node size: 16384 00:12:03.421 Sector size: 4096 (CPU page size: 4096) 00:12:03.421 Filesystem size: 510.00MiB 00:12:03.421 Block group profiles: 00:12:03.421 Data: single 8.00MiB 00:12:03.421 Metadata: DUP 32.00MiB 00:12:03.421 System: DUP 8.00MiB 00:12:03.421 SSD detected: yes 00:12:03.421 Zoned device: no 00:12:03.421 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.421 Checksum: crc32c 00:12:03.421 Number of devices: 1 00:12:03.421 Devices: 00:12:03.421 ID SIZE PATH 00:12:03.421 1 510.00MiB /dev/nvme0n1p1 00:12:03.421 00:12:03.421 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:03.421 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 84817 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.679 ************************************ 00:12:03.679 END TEST filesystem_in_capsule_btrfs 00:12:03.679 ************************************ 00:12:03.679 00:12:03.679 real 0m0.323s 00:12:03.679 user 0m0.017s 00:12:03.679 sys 0m0.073s 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.679 ************************************ 00:12:03.679 START TEST filesystem_in_capsule_xfs 00:12:03.679 ************************************ 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:03.679 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.679 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.679 = sectsz=512 attr=2, projid32bit=1 00:12:03.679 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.679 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.679 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.679 = sunit=0 swidth=0 blks 00:12:03.679 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.679 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.679 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.679 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.614 Discarding blocks...Done. 
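
The make_filesystem helper picks its force flag per filesystem: ext4 takes -F while btrfs and xfs take -f, which is exactly what the three '[' <fstype> = ext4 ']' traces above show. A rough sketch of that dispatch, assuming the helper is essentially a flag switch plus the mkfs call (the traced helper also declares a retry counter, local i=0, that this sketch omits):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs.$fstype $force "$dev_name"
    }
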
00:12:04.614 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:04.614 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.515 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 84817 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.516 ************************************ 00:12:06.516 END TEST filesystem_in_capsule_xfs 00:12:06.516 ************************************ 00:12:06.516 00:12:06.516 real 0m2.618s 00:12:06.516 user 0m0.020s 00:12:06.516 sys 0m0.059s 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.516 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 84817 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 84817 ']' 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 84817 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84817 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.516 killing process with pid 84817 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84817' 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 84817 00:12:06.516 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 84817 00:12:07.083 ************************************ 00:12:07.084 END TEST nvmf_filesystem_in_capsule 00:12:07.084 ************************************ 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
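
Teardown is the mirror image of setup: drop the partition under a lock, flush, disconnect the initiator, wait for the serial to disappear from lsblk, delete the subsystem over RPC, and kill the target. Condensed from the traces above, with the waitforserial_disconnect retry loop simplified to a plain poll:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # poll until no block device advertises the serial any more
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 84817 && wait 84817
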
00:12:07.084 00:12:07.084 real 0m13.550s 00:12:07.084 user 0m51.957s 00:12:07.084 sys 0m1.699s 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.084 rmmod nvme_tcp 00:12:07.084 rmmod nvme_fabrics 00:12:07.084 rmmod nvme_keyring 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:07.084 17:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:07.084 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.343 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:12:07.343 00:12:07.343 real 0m29.125s 00:12:07.343 user 1m47.131s 00:12:07.343 sys 0m3.964s 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 ************************************ 00:12:07.343 END TEST nvmf_filesystem 00:12:07.343 ************************************ 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.343 ************************************ 00:12:07.343 START TEST nvmf_target_discovery 00:12:07.343 ************************************ 00:12:07.343 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.608 * Looking for test storage... 
00:12:07.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.608 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:07.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.609 --rc genhtml_branch_coverage=1 00:12:07.609 --rc genhtml_function_coverage=1 00:12:07.609 --rc genhtml_legend=1 00:12:07.609 --rc geninfo_all_blocks=1 00:12:07.609 --rc geninfo_unexecuted_blocks=1 00:12:07.609 00:12:07.609 ' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:07.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.609 --rc genhtml_branch_coverage=1 00:12:07.609 --rc genhtml_function_coverage=1 00:12:07.609 --rc genhtml_legend=1 00:12:07.609 --rc geninfo_all_blocks=1 00:12:07.609 --rc geninfo_unexecuted_blocks=1 00:12:07.609 00:12:07.609 ' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:07.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.609 --rc genhtml_branch_coverage=1 00:12:07.609 --rc genhtml_function_coverage=1 00:12:07.609 --rc genhtml_legend=1 00:12:07.609 --rc geninfo_all_blocks=1 00:12:07.609 --rc geninfo_unexecuted_blocks=1 00:12:07.609 00:12:07.609 ' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:07.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.609 --rc genhtml_branch_coverage=1 00:12:07.609 --rc genhtml_function_coverage=1 00:12:07.609 --rc genhtml_legend=1 00:12:07.609 --rc geninfo_all_blocks=1 00:12:07.609 --rc geninfo_unexecuted_blocks=1 00:12:07.609 00:12:07.609 ' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.609 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.609 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
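
The "[: : integer expression expected" line above is bash reporting that common.sh line 33 ran a numeric test, '[' '' -eq 1 ']', with an empty operand: -eq requires integers on both sides, so the test exits with status 2 instead of false. The harness tolerates the nonzero status, but the usual guards look like this (the variable name is illustrative, not the one common.sh uses):

    # fails with "integer expression expected" when $flag is empty or unset
    [ "$flag" -eq 1 ]
    # defensive forms that degrade to false instead of erroring
    [ "${flag:-0}" -eq 1 ]
    [[ $flag == 1 ]]
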
00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:07.610 Cannot find device "nvmf_init_br" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:07.610 Cannot find device "nvmf_init_br2" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:07.610 Cannot find device "nvmf_tgt_br" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.610 Cannot find device "nvmf_tgt_br2" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:07.610 Cannot find device "nvmf_init_br" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:07.610 Cannot find device "nvmf_init_br2" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:07.610 Cannot find device "nvmf_tgt_br" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:07.610 Cannot find device "nvmf_tgt_br2" 00:12:07.610 17:32:49 
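
The run of 'Cannot find device "..."' messages here is expected: before building its topology, the veth setup first tears down anything left over from a previous run, and on a clean host every delete probe fails. Each failed probe is immediately followed by a "# true" trace, consistent with the helper suffixing the commands so the failure is swallowed, in the spirit of:

    ip link set nvmf_init_br nomaster || true     # prints "Cannot find device" on a clean host
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
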
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:07.610 Cannot find device "nvmf_br" 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:12:07.610 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:07.890 Cannot find device "nvmf_init_if" 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:07.890 Cannot find device "nvmf_init_if2" 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:07.890 17:32:49 
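
By this point the harness has built its standard virtual topology: a network namespace for the target, two veth pairs for the initiator side and two for the target side, with 10.0.0.1/2 on the host and 10.0.0.3/4 inside the namespace. The commands traced above reduce to the following; the *_br peer ends are then enslaved to the nvmf_br bridge, as the next trace lines show:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
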
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:07.890 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:07.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:07.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:12:07.891 00:12:07.891 --- 10.0.0.3 ping statistics --- 00:12:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.891 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:07.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:07.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:12:07.891 00:12:07.891 --- 10.0.0.4 ping statistics --- 00:12:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.891 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:07.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:07.891 00:12:07.891 --- 10.0.0.1 ping statistics --- 00:12:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.891 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:07.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:07.891 00:12:07.891 --- 10.0.0.2 ping statistics --- 00:12:07.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.891 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # return 0 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:07.891 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=85395 00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
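The topology built in the trace above is a pair of veth links bridged on the host, with the target ends moved into the nvmf_tgt_ns_spdk namespace and TCP/4420 opened on the initiator side. Condensed to a single veth pair (the run also creates nvmf_init_if2/nvmf_tgt_if2 carrying 10.0.0.2 and 10.0.0.4), a minimal sketch using the same iproute2 and iptables commands:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator-side address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                    # host-side bridge joins the peer ends
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                 # host -> namespaced target

The four pings in the trace check both directions: host to the namespaced addresses (.3, .4) and the namespace back to the host addresses (.1, .2).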
00:12:08.155 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 85395 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 85395 ']' 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.156 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.156 [2024-11-20 17:32:49.763471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:08.156 [2024-11-20 17:32:49.763581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.156 [2024-11-20 17:32:49.907953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.463 [2024-11-20 17:32:50.012230] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.463 [2024-11-20 17:32:50.012290] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.463 [2024-11-20 17:32:50.012305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.463 [2024-11-20 17:32:50.012316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.463 [2024-11-20 17:32:50.012324] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
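nvmfappstart then launches the target inside the namespace and blocks until the RPC socket exists. A sketch of that step, with a plain polling loop standing in for the harness's waitforlisten helper (not its actual implementation):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do   # wait for the app's UNIX-domain RPC socket
    kill -0 "$nvmfpid" || exit 1        # give up if the target exited early
    sleep 0.1
done

-m 0xF pins four reactors, which is why four "Reactor started" notices appear once start-up completes.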
00:12:08.463 [2024-11-20 17:32:50.012515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.463 [2024-11-20 17:32:50.012645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.463 [2024-11-20 17:32:50.013547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.463 [2024-11-20 17:32:50.013611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.037 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.037 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:09.037 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:09.037 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.037 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 [2024-11-20 17:32:50.858423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 Null1 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 [2024-11-20 17:32:50.903090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 Null2 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.297 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:09.298 Null3 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 Null4 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:50 
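Iterations 1 through 4 above all run the same rpc_cmd sequence: create a null bdev, create a subsystem, attach the namespace, then (next in the trace) expose a listener. Collapsed into the loop discovery.sh is evidently driving, with rpc.py as a stand-in for the rpc_cmd wrapper and the socket path taken from the waitforlisten line:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192   # transport flags exactly as traced
for i in $(seq 1 4); do
    $rpc bdev_null_create "Null$i" 102400 512  # size/block-size args as traced
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"            # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done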
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 4420 00:12:09.558 00:12:09.558 Discovery Log Number of Records 6, Generation counter 6 00:12:09.558 =====Discovery Log Entry 0====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: current discovery subsystem 00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4420 00:12:09.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: explicit discovery connections, duplicate discovery information 00:12:09.558 sectype: none 00:12:09.558 =====Discovery Log Entry 1====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: nvme subsystem 00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4420 00:12:09.558 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: none 00:12:09.558 sectype: none 00:12:09.558 =====Discovery Log Entry 2====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: nvme subsystem 00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4420 00:12:09.558 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: none 00:12:09.558 sectype: none 00:12:09.558 =====Discovery Log Entry 3====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: nvme subsystem 00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4420 00:12:09.558 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: none 00:12:09.558 sectype: none 00:12:09.558 =====Discovery Log Entry 4====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: nvme subsystem 
00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4420 00:12:09.558 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: none 00:12:09.558 sectype: none 00:12:09.558 =====Discovery Log Entry 5====== 00:12:09.558 trtype: tcp 00:12:09.558 adrfam: ipv4 00:12:09.558 subtype: discovery subsystem referral 00:12:09.558 treq: not required 00:12:09.558 portid: 0 00:12:09.558 trsvcid: 4430 00:12:09.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.558 traddr: 10.0.0.3 00:12:09.558 eflags: none 00:12:09.558 sectype: none 00:12:09.558 Perform nvmf subsystem discovery via RPC 00:12:09.558 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:09.558 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:09.558 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.558 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.558 [ 00:12:09.558 { 00:12:09.558 "allow_any_host": true, 00:12:09.558 "hosts": [], 00:12:09.558 "listen_addresses": [ 00:12:09.558 { 00:12:09.558 "adrfam": "IPv4", 00:12:09.558 "traddr": "10.0.0.3", 00:12:09.558 "trsvcid": "4420", 00:12:09.558 "trtype": "TCP" 00:12:09.558 } 00:12:09.558 ], 00:12:09.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:09.558 "subtype": "Discovery" 00:12:09.558 }, 00:12:09.558 { 00:12:09.558 "allow_any_host": true, 00:12:09.558 "hosts": [], 00:12:09.558 "listen_addresses": [ 00:12:09.558 { 00:12:09.558 "adrfam": "IPv4", 00:12:09.558 "traddr": "10.0.0.3", 00:12:09.558 "trsvcid": "4420", 00:12:09.558 "trtype": "TCP" 00:12:09.558 } 00:12:09.558 ], 00:12:09.558 "max_cntlid": 65519, 00:12:09.558 "max_namespaces": 32, 00:12:09.558 "min_cntlid": 1, 00:12:09.558 "model_number": "SPDK bdev Controller", 00:12:09.558 "namespaces": [ 00:12:09.558 { 00:12:09.558 "bdev_name": "Null1", 00:12:09.558 "name": "Null1", 00:12:09.558 "nguid": "8F471474B281468187DE1E7D81812917", 00:12:09.558 "nsid": 1, 00:12:09.558 "uuid": "8f471474-b281-4681-87de-1e7d81812917" 00:12:09.558 } 00:12:09.558 ], 00:12:09.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.558 "serial_number": "SPDK00000000000001", 00:12:09.558 "subtype": "NVMe" 00:12:09.558 }, 00:12:09.558 { 00:12:09.558 "allow_any_host": true, 00:12:09.558 "hosts": [], 00:12:09.558 "listen_addresses": [ 00:12:09.558 { 00:12:09.558 "adrfam": "IPv4", 00:12:09.558 "traddr": "10.0.0.3", 00:12:09.558 "trsvcid": "4420", 00:12:09.558 "trtype": "TCP" 00:12:09.558 } 00:12:09.558 ], 00:12:09.558 "max_cntlid": 65519, 00:12:09.558 "max_namespaces": 32, 00:12:09.558 "min_cntlid": 1, 00:12:09.559 "model_number": "SPDK bdev Controller", 00:12:09.559 "namespaces": [ 00:12:09.559 { 00:12:09.559 "bdev_name": "Null2", 00:12:09.559 "name": "Null2", 00:12:09.559 "nguid": "14E1D031B7514CB098248C6C208775B3", 00:12:09.559 "nsid": 1, 00:12:09.559 "uuid": "14e1d031-b751-4cb0-9824-8c6c208775b3" 00:12:09.559 } 00:12:09.559 ], 00:12:09.559 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.559 "serial_number": "SPDK00000000000002", 00:12:09.559 "subtype": "NVMe" 00:12:09.559 }, 00:12:09.559 { 00:12:09.559 "allow_any_host": true, 00:12:09.559 "hosts": [], 00:12:09.559 "listen_addresses": [ 00:12:09.559 { 00:12:09.559 "adrfam": "IPv4", 00:12:09.559 "traddr": "10.0.0.3", 00:12:09.559 "trsvcid": "4420", 00:12:09.559 
"trtype": "TCP" 00:12:09.559 } 00:12:09.559 ], 00:12:09.559 "max_cntlid": 65519, 00:12:09.559 "max_namespaces": 32, 00:12:09.559 "min_cntlid": 1, 00:12:09.559 "model_number": "SPDK bdev Controller", 00:12:09.559 "namespaces": [ 00:12:09.559 { 00:12:09.559 "bdev_name": "Null3", 00:12:09.559 "name": "Null3", 00:12:09.559 "nguid": "3DA160BFAB534147844278360A1E681A", 00:12:09.559 "nsid": 1, 00:12:09.559 "uuid": "3da160bf-ab53-4147-8442-78360a1e681a" 00:12:09.559 } 00:12:09.559 ], 00:12:09.559 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:09.559 "serial_number": "SPDK00000000000003", 00:12:09.559 "subtype": "NVMe" 00:12:09.559 }, 00:12:09.559 { 00:12:09.559 "allow_any_host": true, 00:12:09.559 "hosts": [], 00:12:09.559 "listen_addresses": [ 00:12:09.559 { 00:12:09.559 "adrfam": "IPv4", 00:12:09.559 "traddr": "10.0.0.3", 00:12:09.559 "trsvcid": "4420", 00:12:09.559 "trtype": "TCP" 00:12:09.559 } 00:12:09.559 ], 00:12:09.559 "max_cntlid": 65519, 00:12:09.559 "max_namespaces": 32, 00:12:09.559 "min_cntlid": 1, 00:12:09.559 "model_number": "SPDK bdev Controller", 00:12:09.559 "namespaces": [ 00:12:09.559 { 00:12:09.559 "bdev_name": "Null4", 00:12:09.559 "name": "Null4", 00:12:09.559 "nguid": "3460B823174F4853ADCDD7D272AB0EFC", 00:12:09.559 "nsid": 1, 00:12:09.559 "uuid": "3460b823-174f-4853-adcd-d7d272ab0efc" 00:12:09.559 } 00:12:09.559 ], 00:12:09.559 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:09.559 "serial_number": "SPDK00000000000004", 00:12:09.559 "subtype": "NVMe" 00:12:09.559 } 00:12:09.559 ] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.559 17:32:51 
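Teardown mirrors provisioning: delete each subsystem before its backing bdev, drop the referral, and confirm nothing is left before stopping the app. Condensed, with rpc.py again standing in for rpc_cmd and a simplified kill in place of the killprocess helper:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
for i in $(seq 1 4); do
    $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $rpc bdev_null_delete "Null$i"
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
[ -z "$($rpc bdev_get_bdevs | jq -r '.[].name')" ]   # no bdevs should remain
kill "$nvmfpid" && wait "$nvmfpid"                   # simplified killprocess stand-in

After this, nvmftestfini unloads the nvme-tcp/fabrics/keyring modules, strips the SPDK_NVMF rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, and deletes the veth links, bridge, and namespace, as the remainder of the trace shows.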
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.559 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.559 rmmod nvme_tcp 00:12:09.819 rmmod nvme_fabrics 00:12:09.819 rmmod nvme_keyring 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 85395 ']' 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 85395 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 85395 ']' 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 85395 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85395 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.819 killing process with pid 85395 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85395' 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 85395 00:12:09.819 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 85395 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:10.078 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:10.079 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:10.079 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:10.079 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:10.079 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:10.079 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:12:10.338 00:12:10.338 real 0m2.888s 00:12:10.338 user 0m7.082s 00:12:10.338 sys 0m0.783s 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.338 ************************************ 00:12:10.338 END TEST nvmf_target_discovery 00:12:10.338 ************************************ 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.338 ************************************ 00:12:10.338 START TEST nvmf_referrals 00:12:10.338 ************************************ 00:12:10.338 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.338 * Looking for test storage... 00:12:10.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.338 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:10.338 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:10.338 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.599 --rc genhtml_branch_coverage=1 00:12:10.599 --rc genhtml_function_coverage=1 00:12:10.599 --rc genhtml_legend=1 00:12:10.599 --rc geninfo_all_blocks=1 00:12:10.599 --rc geninfo_unexecuted_blocks=1 00:12:10.599 00:12:10.599 ' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.599 --rc genhtml_branch_coverage=1 00:12:10.599 --rc genhtml_function_coverage=1 00:12:10.599 --rc genhtml_legend=1 00:12:10.599 --rc geninfo_all_blocks=1 00:12:10.599 --rc geninfo_unexecuted_blocks=1 00:12:10.599 00:12:10.599 ' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.599 --rc genhtml_branch_coverage=1 00:12:10.599 --rc genhtml_function_coverage=1 00:12:10.599 --rc genhtml_legend=1 00:12:10.599 --rc geninfo_all_blocks=1 00:12:10.599 --rc geninfo_unexecuted_blocks=1 00:12:10.599 00:12:10.599 ' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.599 --rc genhtml_branch_coverage=1 00:12:10.599 --rc genhtml_function_coverage=1 00:12:10.599 --rc genhtml_legend=1 00:12:10.599 --rc geninfo_all_blocks=1 00:12:10.599 --rc geninfo_unexecuted_blocks=1 00:12:10.599 00:12:10.599 ' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.599 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:10.600 17:32:52 
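referrals.sh layers loopback referral addresses (127.0.0.2-4; the third address and the 4430 port variable follow just below) on top of the same harness. A speculative sketch of how those variables get exercised once the target is up: the add-referral syntax matches the RPC already seen above, while nvmf_discovery_get_referrals is an assumed RPC name for the listing side, not taken from this excerpt:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do   # NVMF_REFERRAL_IP_1..3
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals             # assumed name; lists the entries added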
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:10.600 Cannot find device "nvmf_init_br" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:10.600 Cannot find device "nvmf_init_br2" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:10.600 Cannot find device "nvmf_tgt_br" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.600 Cannot find device "nvmf_tgt_br2" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:10.600 Cannot find device "nvmf_init_br" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:10.600 Cannot find device "nvmf_init_br2" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:10.600 Cannot find device "nvmf_tgt_br" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:10.600 Cannot find device "nvmf_tgt_br2" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:10.600 Cannot find device "nvmf_br" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:10.600 Cannot find device "nvmf_init_if" 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:10.600 Cannot find device "nvmf_init_if2" 00:12:10.600 17:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.600 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:10.859 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:10.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:12:10.860 00:12:10.860 --- 10.0.0.3 ping statistics --- 00:12:10.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.860 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:10.860 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:10.860 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:12:10.860 00:12:10.860 --- 10.0.0.4 ping statistics --- 00:12:10.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.860 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:10.860 00:12:10.860 --- 10.0.0.1 ping statistics --- 00:12:10.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.860 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:10.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:12:10.860 00:12:10.860 --- 10.0.0.2 ping statistics --- 00:12:10.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.860 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # return 0 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:10.860 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=85679 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 85679 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 85679 ']' 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.119 17:32:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.119 [2024-11-20 17:32:52.734263] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:11.119 [2024-11-20 17:32:52.734364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.119 [2024-11-20 17:32:52.881396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.378 [2024-11-20 17:32:52.985816] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.378 [2024-11-20 17:32:52.985888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.378 [2024-11-20 17:32:52.985910] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.378 [2024-11-20 17:32:52.985920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.378 [2024-11-20 17:32:52.985929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.378 [2024-11-20 17:32:52.986114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.378 [2024-11-20 17:32:52.986920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.378 [2024-11-20 17:32:52.987505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.378 [2024-11-20 17:32:52.987567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.944 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.944 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:11.944 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:11.944 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.944 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.203 [2024-11-20 17:32:53.768405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.203 [2024-11-20 17:32:53.784658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.203 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.204 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.462 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:12.463 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:12.721 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.981 17:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.981 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.240 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:13.240 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.240 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:13.240 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:13.241 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 
--hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:13.241 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:13.500 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.759 
17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.759 rmmod nvme_tcp 00:12:13.759 rmmod nvme_fabrics 00:12:13.759 rmmod nvme_keyring 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 85679 ']' 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 85679 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 85679 ']' 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 85679 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85679 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85679' 00:12:13.759 killing process with pid 85679 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 85679 00:12:13.759 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 85679 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:14.017 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.275 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:12:14.276 00:12:14.276 real 0m3.997s 00:12:14.276 user 0m11.992s 00:12:14.276 sys 0m1.111s 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.276 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 ************************************ 00:12:14.276 END TEST nvmf_referrals 00:12:14.276 ************************************ 00:12:14.276 17:32:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.276 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:14.276 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.276 17:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.276 ************************************ 00:12:14.276 START TEST nvmf_connect_disconnect 00:12:14.276 ************************************ 00:12:14.276 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.535 * Looking for test storage... 
00:12:14.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:14.535 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:14.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.536 --rc genhtml_branch_coverage=1 00:12:14.536 --rc genhtml_function_coverage=1 00:12:14.536 --rc genhtml_legend=1 00:12:14.536 --rc geninfo_all_blocks=1 00:12:14.536 --rc geninfo_unexecuted_blocks=1 00:12:14.536 00:12:14.536 ' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:14.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.536 --rc genhtml_branch_coverage=1 00:12:14.536 --rc genhtml_function_coverage=1 00:12:14.536 --rc genhtml_legend=1 00:12:14.536 --rc geninfo_all_blocks=1 00:12:14.536 --rc geninfo_unexecuted_blocks=1 00:12:14.536 00:12:14.536 ' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:14.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.536 --rc genhtml_branch_coverage=1 00:12:14.536 --rc genhtml_function_coverage=1 00:12:14.536 --rc genhtml_legend=1 00:12:14.536 --rc geninfo_all_blocks=1 00:12:14.536 --rc geninfo_unexecuted_blocks=1 00:12:14.536 00:12:14.536 ' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:14.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.536 --rc genhtml_branch_coverage=1 00:12:14.536 --rc genhtml_function_coverage=1 00:12:14.536 --rc genhtml_legend=1 00:12:14.536 --rc geninfo_all_blocks=1 00:12:14.536 --rc geninfo_unexecuted_blocks=1 00:12:14.536 00:12:14.536 ' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.536 17:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:14.536 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:14.537 Cannot find device "nvmf_init_br" 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:14.537 Cannot find device "nvmf_init_br2" 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:14.537 Cannot find device "nvmf_tgt_br" 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:12:14.537 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.795 Cannot find device "nvmf_tgt_br2" 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:14.795 Cannot find device "nvmf_init_br" 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:14.795 Cannot find device "nvmf_init_br2" 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:12:14.795 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:14.796 Cannot find device "nvmf_tgt_br" 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:14.796 Cannot find device "nvmf_tgt_br2" 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
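
[Editor's note: the "Cannot find device" failures above are expected -- nvmf_veth_init first tears down whatever test network a previous run left behind (the remaining deletions just below finish that teardown), then the entries from nvmf/common.sh@177 onward rebuild it. Condensed into a standalone sketch using the exact names and addresses from the trace:

    # Two initiator veths stay in the root namespace; the two target
    # veths move into nvmf_tgt_ns_spdk; all four peer ends join nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if     # first initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2    # second initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

Bringing up the *_if ends and the namespace loopback is elided here; the trace below shows every step.]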
00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:14.796 Cannot find device "nvmf_br" 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:14.796 Cannot find device "nvmf_init_if" 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:14.796 Cannot find device "nvmf_init_if2" 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:14.796 17:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:14.796 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.054 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:15.055 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
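
[Editor's note: ipts above is the harness's iptables wrapper. It tags every rule it inserts with a comment carrying the literal rule text, so teardown can later remove exactly these rules -- the iptr step near the end of this test runs iptables-save | grep -v SPDK_NVMF | iptables-restore. A sketch of the pair, reconstructed from the expanded commands visible in the trace:

    # Insert a rule tagged with an SPDK_NVMF comment for later cleanup.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    # Teardown: rewrite the ruleset without any tagged rule.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

The replies to the 10.0.0.3 reachability probe continue below.]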
00:12:15.055 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:12:15.055 00:12:15.055 --- 10.0.0.3 ping statistics --- 00:12:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.055 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:15.055 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:15.055 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:12:15.055 00:12:15.055 --- 10.0.0.4 ping statistics --- 00:12:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.055 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:15.055 00:12:15.055 --- 10.0.0.1 ping statistics --- 00:12:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.055 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:15.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:15.055 00:12:15.055 --- 10.0.0.2 ping statistics --- 00:12:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.055 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # return 0 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=86039 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 86039 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 86039 ']' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.055 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.055 [2024-11-20 17:32:56.794116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:15.055 [2024-11-20 17:32:56.794220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.313 [2024-11-20 17:32:56.935207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.313 [2024-11-20 17:32:57.031919] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.313 [2024-11-20 17:32:57.031973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.313 [2024-11-20 17:32:57.031985] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.313 [2024-11-20 17:32:57.031993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.313 [2024-11-20 17:32:57.032001] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
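
[Editor's note: nvmfappstart launched nvmf_tgt inside the target namespace (pid 86039) and waitforlisten now polls until the app answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming the binary and socket paths shown in the trace; the real autotest_common.sh helper adds more retries and diagnostics:

    # Start the target in its namespace, then poll its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1    # bail out if the app died early
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
]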
00:12:15.313 [2024-11-20 17:32:57.032172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.313 [2024-11-20 17:32:57.032292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.313 [2024-11-20 17:32:57.032957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.313 [2024-11-20 17:32:57.032992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.248 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.248 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 [2024-11-20 17:32:57.926932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 17:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.249 [2024-11-20 17:32:57.986351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:16.249 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:18.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.109 
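
[Editor's note: with the target provisioned -- tcp transport (-o -u 8192 -c 0), a 64 MiB / 512-byte Malloc0 namespace on nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.3:4420 -- connect_disconnect.sh switches off xtrace (set +x) and runs num_iterations=100 connect/disconnect cycles, so only the kernel's one-line disconnect message per cycle survives in the log below. A sketch of the loop under those settings; waitforserial is the harness helper that blocks until the namespace shows up, and the real loop adds error checks:

    # Each cycle: attach with 8 I/O queues, wait for the device, detach.
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            -a 10.0.0.3 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done
]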
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... the same one-line kernel message repeats for the intervening cycles, one every couple of seconds from 00:13:28.643 through 00:15:12.237; the repetition is elided ...] 00:15:14.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:16.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.717 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.717 rmmod nvme_tcp 00:16:01.717 rmmod nvme_fabrics 00:16:01.975 rmmod nvme_keyring 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 86039 ']' 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 86039 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 86039 ']' 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 86039 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:01.975 
17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.975 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86039 00:16:01.976 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.976 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.976 killing process with pid 86039 00:16:01.976 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86039' 00:16:01.976 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 86039 00:16:01.976 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 86039 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:02.234 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:02.234 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.494 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:02.494 17:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:16:02.495 00:16:02.495 real 3m48.077s 00:16:02.495 user 14m48.754s 00:16:02.495 sys 0m21.533s 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:02.495 ************************************ 00:16:02.495 END TEST nvmf_connect_disconnect 00:16:02.495 ************************************ 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.495 ************************************ 00:16:02.495 START TEST nvmf_multitarget 00:16:02.495 ************************************ 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.495 * Looking for test storage... 
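
[Editor's note: the storage probe answers just below, after which the trace walks scripts/common.sh's version comparison -- lt 1.15 2 asks whether the installed lcov (1.15) predates version 2 by splitting both strings on '.', '-' and ':' and comparing fields numerically, left to right. A condensed sketch of that helper pair; the upstream version also validates that each field is a decimal:

    # lt A B: succeed when version A sorts before version B.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $2 == '>' || $2 == '>=' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $2 == '<' || $2 == '<=' ]]; return
            fi
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]    # all fields equal
    }

For lcov 1.15 against 2 the first fields already decide it: 1 < 2, so lt succeeds.]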
00:16:02.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:02.495 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:02.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.754 --rc genhtml_branch_coverage=1 00:16:02.754 --rc genhtml_function_coverage=1 00:16:02.754 --rc genhtml_legend=1 00:16:02.754 --rc geninfo_all_blocks=1 00:16:02.754 --rc geninfo_unexecuted_blocks=1 00:16:02.754 00:16:02.754 ' 00:16:02.754 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:02.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.754 --rc genhtml_branch_coverage=1 00:16:02.754 --rc genhtml_function_coverage=1 00:16:02.754 --rc genhtml_legend=1 00:16:02.754 --rc geninfo_all_blocks=1 00:16:02.755 --rc geninfo_unexecuted_blocks=1 00:16:02.755 00:16:02.755 ' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:02.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.755 --rc genhtml_branch_coverage=1 00:16:02.755 --rc genhtml_function_coverage=1 00:16:02.755 --rc genhtml_legend=1 00:16:02.755 --rc geninfo_all_blocks=1 00:16:02.755 --rc geninfo_unexecuted_blocks=1 00:16:02.755 00:16:02.755 ' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:02.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.755 --rc genhtml_branch_coverage=1 00:16:02.755 --rc genhtml_function_coverage=1 00:16:02.755 --rc genhtml_legend=1 00:16:02.755 --rc geninfo_all_blocks=1 00:16:02.755 --rc geninfo_unexecuted_blocks=1 00:16:02.755 00:16:02.755 ' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [... paths/export.sh@2 through @6 re-prepend the same golangci/protoc/go toolchain directories and re-export PATH, exactly as at 00:12:14.536 above; the repeated lists are elided ...] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.755 Cannot find device "nvmf_init_br" 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.755 Cannot find device "nvmf_init_br2" 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.755 Cannot find device "nvmf_tgt_br" 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.755 Cannot find device "nvmf_tgt_br2" 00:16:02.755 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.756 Cannot find device "nvmf_init_br" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.756 Cannot find device "nvmf_init_br2" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.756 Cannot find device "nvmf_tgt_br" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.756 Cannot find device "nvmf_tgt_br2" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.756 Cannot find device "nvmf_br" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:02.756 Cannot find device "nvmf_init_if" 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:16:02.756 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:03.017 Cannot find device "nvmf_init_if2" 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:03.017 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:03.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:03.276 00:16:03.276 --- 10.0.0.3 ping statistics --- 00:16:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.276 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:03.276 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:03.276 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:03.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:03.276 00:16:03.276 --- 10.0.0.4 ping statistics --- 00:16:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.276 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:03.276 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:03.276 00:16:03.276 --- 10.0.0.1 ping statistics --- 00:16:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.276 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:03.276 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:03.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
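The four single-packet pings that begin here (the fourth completes just below) prove connectivity in both directions across the bridge before any NVMe traffic is attempted. Condensed, with the addresses used in this run:

    ping -c 1 10.0.0.3                                  # host nvmf_init_if  -> namespaced nvmf_tgt_if
    ping -c 1 10.0.0.4                                  # host nvmf_init_if2 -> namespaced nvmf_tgt_if2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2   # namespace -> host nvmf_init_if2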
00:16:03.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:03.277 00:16:03.277 --- 10.0.0.2 ping statistics --- 00:16:03.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.277 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # return 0 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=89860 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 89860 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 89860 ']' 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.277 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 [2024-11-20 17:36:44.915016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
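nvmfappstart, whose startup notices begin above, launches the target inside the namespace so that nvmf_tgt binds the namespaced addresses (10.0.0.3/10.0.0.4) while the RPC socket at /var/tmp/spdk.sock stays reachable from the host side of the fixture. A condensed sketch of that launch, assuming the helper functions from the test harness:

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assembled earlier in common.sh
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")       # the common.sh@227 prepend above
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xF &   # -i shm id, -e tracepoint group mask, -m 0xF: 4 cores
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # harness helper: blocks until the app answers on /var/tmp/spdk.sock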
00:16:03.277 [2024-11-20 17:36:44.915140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.277 [2024-11-20 17:36:45.053049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.536 [2024-11-20 17:36:45.160219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.536 [2024-11-20 17:36:45.160276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.536 [2024-11-20 17:36:45.160288] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.536 [2024-11-20 17:36:45.160296] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.536 [2024-11-20 17:36:45.160304] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.536 [2024-11-20 17:36:45.160375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.536 [2024-11-20 17:36:45.160661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.536 [2024-11-20 17:36:45.161381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.536 [2024-11-20 17:36:45.161388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.472 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.472 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:04.472 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:04.472 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.472 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:04.472 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:04.730 "nvmf_tgt_1" 00:16:04.730 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:04.730 "nvmf_tgt_2" 00:16:04.730 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:04.730 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:16:04.988 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:04.988 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:04.988 true 00:16:05.247 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:05.247 true 00:16:05.247 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:05.247 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.506 rmmod nvme_tcp 00:16:05.506 rmmod nvme_fabrics 00:16:05.506 rmmod nvme_keyring 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 89860 ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 89860 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 89860 ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 89860 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89860 00:16:05.506 killing process with pid 89860 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
89860' 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 89860 00:16:05.506 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 89860 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:05.764 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:16:06.023 00:16:06.023 
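Stripped of the xtrace noise, the nvmf_multitarget body that just finished is a create/count/delete round trip against the secondary-target RPCs, roughly:

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default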
real 0m3.601s 00:16:06.023 user 0m10.761s 00:16:06.023 sys 0m0.940s 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.023 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:06.023 ************************************ 00:16:06.023 END TEST nvmf_multitarget 00:16:06.023 ************************************ 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.282 ************************************ 00:16:06.282 START TEST nvmf_rpc 00:16:06.282 ************************************ 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:06.282 * Looking for test storage... 00:16:06.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:06.282 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:06.283 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:06.283 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.283 --rc genhtml_branch_coverage=1 00:16:06.283 --rc genhtml_function_coverage=1 00:16:06.283 --rc genhtml_legend=1 00:16:06.283 --rc geninfo_all_blocks=1 00:16:06.283 --rc geninfo_unexecuted_blocks=1 00:16:06.283 00:16:06.283 ' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.283 --rc genhtml_branch_coverage=1 00:16:06.283 --rc genhtml_function_coverage=1 00:16:06.283 --rc genhtml_legend=1 00:16:06.283 --rc geninfo_all_blocks=1 00:16:06.283 --rc geninfo_unexecuted_blocks=1 00:16:06.283 00:16:06.283 ' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.283 --rc genhtml_branch_coverage=1 00:16:06.283 --rc genhtml_function_coverage=1 00:16:06.283 --rc genhtml_legend=1 00:16:06.283 --rc geninfo_all_blocks=1 00:16:06.283 --rc geninfo_unexecuted_blocks=1 00:16:06.283 00:16:06.283 ' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.283 --rc genhtml_branch_coverage=1 00:16:06.283 --rc genhtml_function_coverage=1 00:16:06.283 --rc genhtml_legend=1 00:16:06.283 --rc geninfo_all_blocks=1 00:16:06.283 --rc geninfo_unexecuted_blocks=1 00:16:06.283 00:16:06.283 ' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.283 17:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:06.283 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.284 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:06.542 Cannot find device "nvmf_init_br" 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:16:06.542 17:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:06.542 Cannot find device "nvmf_init_br2" 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:06.542 Cannot find device "nvmf_tgt_br" 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.542 Cannot find device "nvmf_tgt_br2" 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:06.542 Cannot find device "nvmf_init_br" 00:16:06.542 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:06.543 Cannot find device "nvmf_init_br2" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:06.543 Cannot find device "nvmf_tgt_br" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:06.543 Cannot find device "nvmf_tgt_br2" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:06.543 Cannot find device "nvmf_br" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:06.543 Cannot find device "nvmf_init_if" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:06.543 Cannot find device "nvmf_init_if2" 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:06.543 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.801 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:06.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:16:06.802 00:16:06.802 --- 10.0.0.3 ping statistics --- 00:16:06.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.802 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:06.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:06.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:16:06.802 00:16:06.802 --- 10.0.0.4 ping statistics --- 00:16:06.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.802 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:16:06.802 00:16:06.802 --- 10.0.0.1 ping statistics --- 00:16:06.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.802 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:06.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
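The ipts lines at common.sh@217..219 above are SPDK's iptables wrapper: every rule it inserts carries an SPDK_NVMF-prefixed comment, so the later iptr teardown can strip exactly these rules and nothing else. The pattern, assuming iptables with the comment match module available:

    # open NVMe/TCP port 4420 on an initiator interface, tagged for cleanup
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # teardown (iptr, visible near the end of the multitarget run above):
    iptables-save | grep -v SPDK_NVMF | iptables-restore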
00:16:06.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:06.802 00:16:06.802 --- 10.0.0.2 ping statistics --- 00:16:06.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.802 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # return 0 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=90159 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 90159 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 90159 ']' 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.802 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.802 [2024-11-20 17:36:48.569942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
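nvmf_veth_init, rerun above for the rpc test, rebuilds the same fixture each time: four veth pairs addressed on their *_if ends, the two tgt ends moved into the namespace, and all four *_br peer ends enslaved to the nvmf_br bridge. Roughly:

    # nvmf_init_if  (10.0.0.1, host)   <-veth->  nvmf_init_br  --\
    # nvmf_init_if2 (10.0.0.2, host)   <-veth->  nvmf_init_br2 ---+-- nvmf_br (bridge, host)
    # nvmf_tgt_if   (10.0.0.3, netns)  <-veth->  nvmf_tgt_br   ---+
    # nvmf_tgt_if2  (10.0.0.4, netns)  <-veth->  nvmf_tgt_br2  --/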
00:16:06.802 [2024-11-20 17:36:48.570046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.060 [2024-11-20 17:36:48.716755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.060 [2024-11-20 17:36:48.828869] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.060 [2024-11-20 17:36:48.829225] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.060 [2024-11-20 17:36:48.829251] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.060 [2024-11-20 17:36:48.829262] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.060 [2024-11-20 17:36:48.829272] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.060 [2024-11-20 17:36:48.829414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.060 [2024-11-20 17:36:48.830361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.060 [2024-11-20 17:36:48.830447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.060 [2024-11-20 17:36:48.830456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:07.994 "poll_groups": [ 00:16:07.994 { 00:16:07.994 "admin_qpairs": 0, 00:16:07.994 "completed_nvme_io": 0, 00:16:07.994 "current_admin_qpairs": 0, 00:16:07.994 "current_io_qpairs": 0, 00:16:07.994 "io_qpairs": 0, 00:16:07.994 "name": "nvmf_tgt_poll_group_000", 00:16:07.994 "pending_bdev_io": 0, 00:16:07.994 "transports": [] 00:16:07.994 }, 00:16:07.994 { 00:16:07.994 "admin_qpairs": 0, 00:16:07.994 "completed_nvme_io": 0, 00:16:07.994 "current_admin_qpairs": 0, 00:16:07.994 "current_io_qpairs": 0, 00:16:07.994 "io_qpairs": 0, 00:16:07.994 "name": "nvmf_tgt_poll_group_001", 00:16:07.994 "pending_bdev_io": 0, 00:16:07.994 "transports": [] 00:16:07.994 }, 00:16:07.994 { 00:16:07.994 "admin_qpairs": 0, 00:16:07.994 "completed_nvme_io": 0, 00:16:07.994 "current_admin_qpairs": 0, 00:16:07.994 "current_io_qpairs": 0, 
00:16:07.994 "io_qpairs": 0, 00:16:07.994 "name": "nvmf_tgt_poll_group_002", 00:16:07.994 "pending_bdev_io": 0, 00:16:07.994 "transports": [] 00:16:07.994 }, 00:16:07.994 { 00:16:07.994 "admin_qpairs": 0, 00:16:07.994 "completed_nvme_io": 0, 00:16:07.994 "current_admin_qpairs": 0, 00:16:07.994 "current_io_qpairs": 0, 00:16:07.994 "io_qpairs": 0, 00:16:07.994 "name": "nvmf_tgt_poll_group_003", 00:16:07.994 "pending_bdev_io": 0, 00:16:07.994 "transports": [] 00:16:07.994 } 00:16:07.994 ], 00:16:07.994 "tick_rate": 2200000000 00:16:07.994 }' 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:07.994 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.253 [2024-11-20 17:36:49.858895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.253 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:08.254 "poll_groups": [ 00:16:08.254 { 00:16:08.254 "admin_qpairs": 0, 00:16:08.254 "completed_nvme_io": 0, 00:16:08.254 "current_admin_qpairs": 0, 00:16:08.254 "current_io_qpairs": 0, 00:16:08.254 "io_qpairs": 0, 00:16:08.254 "name": "nvmf_tgt_poll_group_000", 00:16:08.254 "pending_bdev_io": 0, 00:16:08.254 "transports": [ 00:16:08.254 { 00:16:08.254 "trtype": "TCP" 00:16:08.254 } 00:16:08.254 ] 00:16:08.254 }, 00:16:08.254 { 00:16:08.254 "admin_qpairs": 0, 00:16:08.254 "completed_nvme_io": 0, 00:16:08.254 "current_admin_qpairs": 0, 00:16:08.254 "current_io_qpairs": 0, 00:16:08.254 "io_qpairs": 0, 00:16:08.254 "name": "nvmf_tgt_poll_group_001", 00:16:08.254 "pending_bdev_io": 0, 00:16:08.254 "transports": [ 00:16:08.254 { 00:16:08.254 "trtype": "TCP" 00:16:08.254 } 00:16:08.254 ] 00:16:08.254 }, 00:16:08.254 { 00:16:08.254 "admin_qpairs": 0, 00:16:08.254 "completed_nvme_io": 0, 00:16:08.254 "current_admin_qpairs": 0, 00:16:08.254 "current_io_qpairs": 0, 00:16:08.254 "io_qpairs": 0, 00:16:08.254 "name": "nvmf_tgt_poll_group_002", 00:16:08.254 "pending_bdev_io": 0, 00:16:08.254 "transports": [ 00:16:08.254 { 00:16:08.254 "trtype": "TCP" 00:16:08.254 } 
00:16:08.254 ] 00:16:08.254 }, 00:16:08.254 { 00:16:08.254 "admin_qpairs": 0, 00:16:08.254 "completed_nvme_io": 0, 00:16:08.254 "current_admin_qpairs": 0, 00:16:08.254 "current_io_qpairs": 0, 00:16:08.254 "io_qpairs": 0, 00:16:08.254 "name": "nvmf_tgt_poll_group_003", 00:16:08.254 "pending_bdev_io": 0, 00:16:08.254 "transports": [ 00:16:08.254 { 00:16:08.254 "trtype": "TCP" 00:16:08.254 } 00:16:08.254 ] 00:16:08.254 } 00:16:08.254 ], 00:16:08.254 "tick_rate": 2200000000 00:16:08.254 }' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 Malloc1 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:08.254 17:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.512 [2024-11-20 17:36:50.042703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:08.512 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.512 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.3 -s 4420 00:16:08.512 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.3 -s 4420 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.3 -s 4420 00:16:08.513 [2024-11-20 17:36:50.071470] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e' 00:16:08.513 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:08.513 could not add new controller: failed to write to nvme-fabrics device 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
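The Input/output error above is the expected outcome: the subsystem was created with allow_any_host disabled (rpc.sh@54), so the kernel initiator's connect is rejected until rpc.sh@61 whitelists the host NQN just below, and the NOT helper inverts the exit status so the denied connect counts as the passing result. The shape of the check, using rpc_cmd (the harness's rpc.py wrapper) as in the log:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: disallow
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # denied: host not in the allow list

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # now succeeds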
00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.513 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.042 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
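waitforserial and its disconnect counterpart (common/autotest_common.sh@1198-1231) poll lsblk until a block device carrying the subsystem's serial number appears or disappears. The 2-second sleep and the 15-iteration bound come straight from the traced lines; the surrounding loop plumbing is reconstructed and may differ from the real helpers:

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2    # give the kernel time to enumerate the new namespace
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }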
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:11.043 [2024-11-20 17:36:52.383096] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e' 00:16:11.043 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:11.043 could not add new controller: failed to write to nvme-fabrics device 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
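Taken together, rpc.sh@52-76, traced here and in the next few entries, walk the subsystem through all three access-control states: closed (allow_any_host disabled, no host entry), per-host allow list, and fully open. A condensed replay of that sequence, with $HOSTNQN standing in for the uuid-based host NQN used throughout the log:

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" -a 10.0.0.3 -s 4420   # refused

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" -a 10.0.0.3 -s 4420       # allowed
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" -a 10.0.0.3 -s 4420   # refused again

    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" -a 10.0.0.3 -s 4420       # any host allowed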
-- # set +x 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.043 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:13.016 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.017 [2024-11-20 17:36:54.701009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.017 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:13.277 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:13.277 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.277 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.277 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:13.277 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:15.182 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 [2024-11-20 17:36:57.021926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 17:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.442 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 17:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.976 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 [2024-11-20 17:36:59.344680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.977 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # sleep 2 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:19.882 17:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.882 [2024-11-20 17:37:01.657675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:19.882 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.883 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.883 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.883 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:20.155 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.086 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:22.086 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.086 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:22.345 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:22.345 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.345 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:22.345 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:22.345 17:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 [2024-11-20 17:37:04.077092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:22.345 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:22.603 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.604 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.604 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.604 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.604 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:24.505 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.763 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
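The five passes just traced all follow one template (rpc.sh@81-94): build the subsystem, attach the Malloc1 namespace as NSID 5, connect over TCP, wait for the block device, then unwind everything in reverse. One pass of the loop as a sketch, with $HOSTNQN again standing in for the host's uuid NQN:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" -a 10.0.0.3 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME

        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done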
00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 [2024-11-20 17:37:06.410823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 [2024-11-20 17:37:06.458814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.764 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.765 17:37:06 
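The second loop (rpc.sh@99-107), whose iterations continue through the entries that follow, repeats the create/teardown cycle RPC-only, with no fabric connect in between: the namespace is added without -n, so it lands on the default NSID 1 and is removed as such. Sketch of one pass:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # defaults to NSID 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done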
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 [2024-11-20 17:37:06.506927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.765 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 [2024-11-20 17:37:06.555024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 
17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.025 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 [2024-11-20 17:37:06.603192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:25.026 "poll_groups": [ 00:16:25.026 { 00:16:25.026 "admin_qpairs": 2, 00:16:25.026 "completed_nvme_io": 68, 00:16:25.026 "current_admin_qpairs": 0, 00:16:25.026 "current_io_qpairs": 0, 00:16:25.026 "io_qpairs": 16, 00:16:25.026 "name": "nvmf_tgt_poll_group_000", 00:16:25.026 "pending_bdev_io": 0, 00:16:25.026 "transports": [ 00:16:25.026 { 00:16:25.026 "trtype": "TCP" 00:16:25.026 } 00:16:25.026 ] 00:16:25.026 }, 00:16:25.026 { 00:16:25.026 "admin_qpairs": 3, 00:16:25.026 "completed_nvme_io": 117, 00:16:25.026 "current_admin_qpairs": 0, 00:16:25.026 "current_io_qpairs": 0, 00:16:25.026 "io_qpairs": 17, 00:16:25.026 "name": "nvmf_tgt_poll_group_001", 00:16:25.026 "pending_bdev_io": 0, 00:16:25.026 "transports": [ 00:16:25.026 { 00:16:25.026 "trtype": "TCP" 00:16:25.026 } 00:16:25.026 ] 00:16:25.026 }, 00:16:25.026 { 00:16:25.026 "admin_qpairs": 1, 00:16:25.026 "completed_nvme_io": 167, 00:16:25.026 "current_admin_qpairs": 0, 00:16:25.026 "current_io_qpairs": 0, 00:16:25.026 "io_qpairs": 19, 00:16:25.026 "name": "nvmf_tgt_poll_group_002", 00:16:25.026 "pending_bdev_io": 0, 00:16:25.026 "transports": [ 00:16:25.026 { 00:16:25.026 "trtype": "TCP" 00:16:25.026 } 00:16:25.026 ] 00:16:25.026 }, 00:16:25.026 { 00:16:25.026 "admin_qpairs": 1, 00:16:25.026 "completed_nvme_io": 68, 00:16:25.026 "current_admin_qpairs": 0, 00:16:25.026 "current_io_qpairs": 0, 00:16:25.026 "io_qpairs": 18, 00:16:25.026 "name": "nvmf_tgt_poll_group_003", 00:16:25.026 "pending_bdev_io": 0, 00:16:25.026 "transports": [ 00:16:25.026 { 00:16:25.026 "trtype": "TCP" 00:16:25.026 } 00:16:25.026 ] 00:16:25.026 } 00:16:25.026 ], 
00:16:25.026 "tick_rate": 2200000000 00:16:25.026 }' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.026 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.286 rmmod nvme_tcp 00:16:25.286 rmmod nvme_fabrics 00:16:25.286 rmmod nvme_keyring 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 90159 ']' 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 90159 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 90159 ']' 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 90159 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90159 00:16:25.286 killing process with pid 90159 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.286 17:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90159' 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 90159 00:16:25.286 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 90159 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:25.545 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:16:25.805 00:16:25.805 real 0m19.592s 00:16:25.805 user 1m12.749s 00:16:25.805 sys 0m2.359s 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
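The killprocess teardown traced here (autotest_common.sh@950-974) guards the kill with liveness and identity checks before reaping the target; roughly, with error handling simplified:

  # killprocess sketch, following the traced checks
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid"                                   # process must still exist
      local name
      [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1                  # the real helper handles the sudo case separately
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap; works because it is our own child
  }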
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 ************************************ 00:16:25.805 END TEST nvmf_rpc 00:16:25.805 ************************************ 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 ************************************ 00:16:25.805 START TEST nvmf_invalid 00:16:25.805 ************************************ 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.805 * Looking for test storage... 00:16:25.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:25.805 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:26.065 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:26.065 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.065 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.065 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.065 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.066 --rc genhtml_branch_coverage=1 00:16:26.066 --rc genhtml_function_coverage=1 00:16:26.066 --rc genhtml_legend=1 00:16:26.066 --rc geninfo_all_blocks=1 00:16:26.066 --rc geninfo_unexecuted_blocks=1 00:16:26.066 00:16:26.066 ' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.066 --rc genhtml_branch_coverage=1 00:16:26.066 --rc genhtml_function_coverage=1 00:16:26.066 --rc genhtml_legend=1 00:16:26.066 --rc geninfo_all_blocks=1 00:16:26.066 --rc geninfo_unexecuted_blocks=1 00:16:26.066 00:16:26.066 ' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.066 --rc genhtml_branch_coverage=1 00:16:26.066 --rc genhtml_function_coverage=1 00:16:26.066 --rc genhtml_legend=1 00:16:26.066 --rc geninfo_all_blocks=1 00:16:26.066 --rc geninfo_unexecuted_blocks=1 00:16:26.066 00:16:26.066 ' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.066 --rc genhtml_branch_coverage=1 00:16:26.066 --rc genhtml_function_coverage=1 00:16:26.066 --rc genhtml_legend=1 00:16:26.066 --rc geninfo_all_blocks=1 00:16:26.066 --rc geninfo_unexecuted_blocks=1 00:16:26.066 00:16:26.066 ' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:26.066 17:37:07 
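The lcov version probe above funnels into scripts/common.sh's cmp_versions; the '<' path the trace walks (IFS split on '.' and '-', field-by-field numeric compare, missing fields read as 0) looks roughly like:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {                       # sketch of the '<' case only
      local IFS=.- v a b
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a < b)) && return 0          # strictly smaller: 1.15 < 2 holds here
          ((a > b)) && return 1
      done
      return 1                           # equal is not '<'
  }

  # as in the trace: an old lcov gets the legacy --rc spelling
  lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'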
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
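One real wart the trace surfaces: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as a number ("integer expression expected"), so the branch silently falls through. Defaulting the variable before the numeric test would avoid the noise; with a placeholder name, since the log does not show which variable was empty:

  # SOME_FLAG stands in for the unset variable behind common.sh@33
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi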
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:26.066 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
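The remove_spdk_ns call is wrapped in eval '... 15> /dev/null', which silences xtrace for just that command: the harness appears to point BASH_XTRACEFD at fd 15, so redirecting fd 15 per command discards that command's trace lines. A minimal standalone sketch of the mechanism (the helper name is illustrative):

  exec 15>&2                     # give trace output its own descriptor
  BASH_XTRACEFD=15
  set -x
  noisy_teardown() { ip netns del nvmf_tgt_ns_spdk 2> /dev/null || true; }
  noisy_teardown 15> /dev/null   # this call's xtrace goes to /dev/null
  echo visible                   # traced normally again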
00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:26.067 Cannot find device "nvmf_init_br" 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:26.067 Cannot find device "nvmf_init_br2" 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:26.067 Cannot find device "nvmf_tgt_br" 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.067 Cannot find device "nvmf_tgt_br2" 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:26.067 Cannot find device "nvmf_init_br" 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:16:26.067 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:26.326 Cannot find device "nvmf_init_br2" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:26.326 Cannot find device "nvmf_tgt_br" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:26.326 Cannot find device "nvmf_tgt_br2" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:26.326 Cannot find device "nvmf_br" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:26.326 Cannot find device "nvmf_init_if" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:26.326 Cannot find device "nvmf_init_if2" 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.326 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.326 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:26.326 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.586 17:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:26.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:16:26.586 00:16:26.586 --- 10.0.0.3 ping statistics --- 00:16:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.586 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:26.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:26.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:16:26.586 00:16:26.586 --- 10.0.0.4 ping statistics --- 00:16:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.586 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:26.586 00:16:26.586 --- 10.0.0.1 ping statistics --- 00:16:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.586 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:26.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
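Both iptables wrappers are visible in this run: ipts (nvmf/common.sh@786) tags every rule it inserts with a SPDK_NVMF comment, and iptr (seen at @787 during the earlier nvmf_rpc teardown) restores the ruleset minus anything carrying that tag, so only the suite's rules are dropped:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  # ... tests run here ...
  iptr                                                            # remove only our tagged rules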
00:16:26.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:26.586 00:16:26.586 --- 10.0.0.2 ping statistics --- 00:16:26.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.586 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # return 0 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=90722 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 90722 00:16:26.586 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 90722 ']' 00:16:26.587 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.587 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.587 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.587 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.587 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.587 [2024-11-20 17:37:08.320909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
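nvmfappstart, as traced, launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers; a simplified equivalent (the poll loop is an assumption about waitforlisten's behavior, which also wires up traps and shm handling):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!    # 90722 in this run

  # waitforlisten sketch: poll the UNIX domain RPC socket until it responds
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done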
00:16:26.587 [2024-11-20 17:37:08.321319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.845 [2024-11-20 17:37:08.463841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.846 [2024-11-20 17:37:08.606126] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.846 [2024-11-20 17:37:08.606595] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.846 [2024-11-20 17:37:08.606631] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.846 [2024-11-20 17:37:08.606644] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.846 [2024-11-20 17:37:08.606653] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.846 [2024-11-20 17:37:08.606896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.846 [2024-11-20 17:37:08.607011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.846 [2024-11-20 17:37:08.607174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.846 [2024-11-20 17:37:08.607185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:27.781 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27208 00:16:28.039 [2024-11-20 17:37:09.728206] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:28.039 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/20 17:37:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27208 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:28.039 request: 00:16:28.039 { 00:16:28.039 "method": "nvmf_create_subsystem", 00:16:28.039 "params": { 00:16:28.039 "nqn": "nqn.2016-06.io.spdk:cnode27208", 00:16:28.039 "tgt_name": "foobar" 00:16:28.039 } 00:16:28.039 } 00:16:28.039 Got JSON-RPC error response 00:16:28.039 GoRPCClient: error on JSON-RPC call' 00:16:28.039 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/20 17:37:09 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode27208 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:28.039 request: 00:16:28.039 { 00:16:28.039 "method": "nvmf_create_subsystem", 00:16:28.039 "params": { 00:16:28.039 "nqn": "nqn.2016-06.io.spdk:cnode27208", 00:16:28.039 "tgt_name": "foobar" 00:16:28.039 } 00:16:28.039 } 00:16:28.039 Got JSON-RPC error response 00:16:28.039 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:28.039 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:28.039 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26388 00:16:28.297 [2024-11-20 17:37:10.024501] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26388: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:28.297 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26388 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:28.297 request: 00:16:28.297 { 00:16:28.297 "method": "nvmf_create_subsystem", 00:16:28.297 "params": { 00:16:28.297 "nqn": "nqn.2016-06.io.spdk:cnode26388", 00:16:28.297 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:28.297 } 00:16:28.297 } 00:16:28.297 Got JSON-RPC error response 00:16:28.297 GoRPCClient: error on JSON-RPC call' 00:16:28.297 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26388 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:28.297 request: 00:16:28.297 { 00:16:28.297 "method": "nvmf_create_subsystem", 00:16:28.297 "params": { 00:16:28.297 "nqn": "nqn.2016-06.io.spdk:cnode26388", 00:16:28.297 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:28.297 } 00:16:28.297 } 00:16:28.297 Got JSON-RPC error response 00:16:28.297 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:28.297 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:28.297 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24824 00:16:28.866 [2024-11-20 17:37:10.360786] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24824: invalid model number 'SPDK_Controller' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode24824], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:28.866 request: 00:16:28.866 { 00:16:28.866 "method": "nvmf_create_subsystem", 00:16:28.866 "params": { 00:16:28.866 "nqn": "nqn.2016-06.io.spdk:cnode24824", 00:16:28.866 "model_number": "SPDK_Controller\u001f" 
00:16:28.866 } 00:16:28.866 } 00:16:28.866 Got JSON-RPC error response 00:16:28.866 GoRPCClient: error on JSON-RPC call' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode24824], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:28.866 request: 00:16:28.866 { 00:16:28.866 "method": "nvmf_create_subsystem", 00:16:28.866 "params": { 00:16:28.866 "nqn": "nqn.2016-06.io.spdk:cnode24824", 00:16:28.866 "model_number": "SPDK_Controller\u001f" 00:16:28.866 } 00:16:28.866 } 00:16:28.866 Got JSON-RPC error response 00:16:28.866 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 
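Each rejection case here follows one shape: build a value with a control byte appended via ANSI-C quoting ($'\037' is 0x1f, shown escaped as \u001f in the JSON-RPC error), fire the RPC, and assert on the error text rather than the exit code:

  bad_sn="SPDKISFASTANDAWESOME"$'\037'      # valid chars plus one non-printable byte
  out=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
        -s "$bad_sn" nqn.2016-06.io.spdk:cnode26388 2>&1) || true
  [[ $out == *"Invalid SN"* ]]              # model-number cases check *"Invalid MN"* instead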
17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 
00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2b' 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.866 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=,iWj =ia*HVbb=+#OD"' 00:16:28.867 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '=,iWj =ia*HVbb=+#OD"' nqn.2016-06.io.spdk:cnode27193 00:16:29.126 [2024-11-20 17:37:10.753205] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27193: invalid serial number '=,iWj =ia*HVbb=+#OD"' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27193 serial_number:=,iWj =ia*HVbb=+#OD"], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN =,iWj =ia*HVbb=+#OD" 00:16:29.126 request: 00:16:29.126 { 00:16:29.126 "method": "nvmf_create_subsystem", 00:16:29.126 "params": { 00:16:29.126 "nqn": 
"nqn.2016-06.io.spdk:cnode27193", 00:16:29.126 "serial_number": "=,iWj =ia*HVbb\u007f=+#OD\"" 00:16:29.126 } 00:16:29.126 } 00:16:29.126 Got JSON-RPC error response 00:16:29.126 GoRPCClient: error on JSON-RPC call' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/20 17:37:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27193 serial_number:=,iWj =ia*HVbb=+#OD"], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN =,iWj =ia*HVbb=+#OD" 00:16:29.126 request: 00:16:29.126 { 00:16:29.126 "method": "nvmf_create_subsystem", 00:16:29.126 "params": { 00:16:29.126 "nqn": "nqn.2016-06.io.spdk:cnode27193", 00:16:29.126 "serial_number": "=,iWj =ia*HVbb\u007f=+#OD\"" 00:16:29.126 } 00:16:29.126 } 00:16:29.126 Got JSON-RPC error response 00:16:29.126 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 
00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:29.126 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x5d' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 49 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.127 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=5 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:29.387 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.387 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.388 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:16:29.388 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({' 00:16:29.388 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({' nqn.2016-06.io.spdk:cnode12273 00:16:29.649 [2024-11-20 17:37:11.321649] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12273: invalid model number '%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({' 00:16:29.649 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/20 17:37:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({ nqn:nqn.2016-06.io.spdk:cnode12273], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN %RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({ 00:16:29.649 request: 00:16:29.650 { 00:16:29.650 "method": "nvmf_create_subsystem", 00:16:29.650 "params": { 00:16:29.650 "nqn": 
"nqn.2016-06.io.spdk:cnode12273", 00:16:29.650 "model_number": "%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({" 00:16:29.650 } 00:16:29.650 } 00:16:29.650 Got JSON-RPC error response 00:16:29.650 GoRPCClient: error on JSON-RPC call' 00:16:29.650 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/20 17:37:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({ nqn:nqn.2016-06.io.spdk:cnode12273], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN %RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({ 00:16:29.650 request: 00:16:29.650 { 00:16:29.650 "method": "nvmf_create_subsystem", 00:16:29.650 "params": { 00:16:29.650 "nqn": "nqn.2016-06.io.spdk:cnode12273", 00:16:29.650 "model_number": "%RSe)P{;T]-u@B=<1S$>E~PX#1xY{0?B:n7~5, ({" 00:16:29.650 } 00:16:29.650 } 00:16:29.650 Got JSON-RPC error response 00:16:29.650 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:29.650 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:29.913 [2024-11-20 17:37:11.593941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.913 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:30.171 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:30.171 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:30.171 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:30.171 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:30.171 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:30.738 [2024-11-20 17:37:12.227051] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:30.738 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:30.738 request: 00:16:30.738 { 00:16:30.738 "method": "nvmf_subsystem_remove_listener", 00:16:30.738 "params": { 00:16:30.738 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:30.739 "listen_address": { 00:16:30.739 "trtype": "tcp", 00:16:30.739 "traddr": "", 00:16:30.739 "trsvcid": "4421" 00:16:30.739 } 00:16:30.739 } 00:16:30.739 } 00:16:30.739 Got JSON-RPC error response 00:16:30.739 GoRPCClient: error on JSON-RPC call' 00:16:30.739 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:30.739 request: 00:16:30.739 { 00:16:30.739 "method": "nvmf_subsystem_remove_listener", 00:16:30.739 "params": { 00:16:30.739 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:16:30.739 "listen_address": { 00:16:30.739 "trtype": "tcp", 00:16:30.739 "traddr": "", 00:16:30.739 "trsvcid": "4421" 00:16:30.739 } 00:16:30.739 } 00:16:30.739 } 00:16:30.739 Got JSON-RPC error response 00:16:30.739 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:30.739 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9460 -i 0 00:16:30.739 [2024-11-20 17:37:12.509587] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9460: invalid cntlid range [0-65519] 00:16:30.997 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9460], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:30.997 request: 00:16:30.997 { 00:16:30.997 "method": "nvmf_create_subsystem", 00:16:30.997 "params": { 00:16:30.997 "nqn": "nqn.2016-06.io.spdk:cnode9460", 00:16:30.997 "min_cntlid": 0 00:16:30.997 } 00:16:30.997 } 00:16:30.997 Got JSON-RPC error response 00:16:30.997 GoRPCClient: error on JSON-RPC call' 00:16:30.997 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9460], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:30.997 request: 00:16:30.997 { 00:16:30.997 "method": "nvmf_create_subsystem", 00:16:30.997 "params": { 00:16:30.997 "nqn": "nqn.2016-06.io.spdk:cnode9460", 00:16:30.997 "min_cntlid": 0 00:16:30.997 } 00:16:30.997 } 00:16:30.997 Got JSON-RPC error response 00:16:30.997 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:30.997 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18842 -i 65520 00:16:31.257 [2024-11-20 17:37:12.847794] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18842: invalid cntlid range [65520-65519] 00:16:31.257 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18842], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:31.257 request: 00:16:31.257 { 00:16:31.257 "method": "nvmf_create_subsystem", 00:16:31.257 "params": { 00:16:31.257 "nqn": "nqn.2016-06.io.spdk:cnode18842", 00:16:31.257 "min_cntlid": 65520 00:16:31.257 } 00:16:31.257 } 00:16:31.257 Got JSON-RPC error response 00:16:31.257 GoRPCClient: error on JSON-RPC call' 00:16:31.257 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/20 17:37:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18842], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:31.257 request: 00:16:31.257 { 00:16:31.257 "method": "nvmf_create_subsystem", 00:16:31.257 "params": { 00:16:31.257 
"nqn": "nqn.2016-06.io.spdk:cnode18842", 00:16:31.257 "min_cntlid": 65520 00:16:31.257 } 00:16:31.257 } 00:16:31.257 Got JSON-RPC error response 00:16:31.257 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.257 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15977 -I 0 00:16:31.517 [2024-11-20 17:37:13.128040] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15977: invalid cntlid range [1-0] 00:16:31.517 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode15977], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:31.517 request: 00:16:31.517 { 00:16:31.517 "method": "nvmf_create_subsystem", 00:16:31.517 "params": { 00:16:31.517 "nqn": "nqn.2016-06.io.spdk:cnode15977", 00:16:31.517 "max_cntlid": 0 00:16:31.517 } 00:16:31.517 } 00:16:31.517 Got JSON-RPC error response 00:16:31.517 GoRPCClient: error on JSON-RPC call' 00:16:31.517 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode15977], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:31.517 request: 00:16:31.517 { 00:16:31.517 "method": "nvmf_create_subsystem", 00:16:31.517 "params": { 00:16:31.517 "nqn": "nqn.2016-06.io.spdk:cnode15977", 00:16:31.517 "max_cntlid": 0 00:16:31.517 } 00:16:31.517 } 00:16:31.517 Got JSON-RPC error response 00:16:31.517 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.517 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12361 -I 65520 00:16:31.776 [2024-11-20 17:37:13.440403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12361: invalid cntlid range [1-65520] 00:16:31.776 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12361], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:31.776 request: 00:16:31.776 { 00:16:31.776 "method": "nvmf_create_subsystem", 00:16:31.776 "params": { 00:16:31.776 "nqn": "nqn.2016-06.io.spdk:cnode12361", 00:16:31.776 "max_cntlid": 65520 00:16:31.776 } 00:16:31.776 } 00:16:31.776 Got JSON-RPC error response 00:16:31.776 GoRPCClient: error on JSON-RPC call' 00:16:31.776 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12361], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:31.776 request: 00:16:31.776 { 00:16:31.776 "method": "nvmf_create_subsystem", 00:16:31.776 "params": { 00:16:31.776 "nqn": "nqn.2016-06.io.spdk:cnode12361", 00:16:31.776 "max_cntlid": 65520 00:16:31.776 } 00:16:31.776 } 00:16:31.776 Got 
JSON-RPC error response 00:16:31.776 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.776 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26805 -i 6 -I 5 00:16:32.034 [2024-11-20 17:37:13.712717] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26805: invalid cntlid range [6-5] 00:16:32.034 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode26805], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:32.034 request: 00:16:32.034 { 00:16:32.034 "method": "nvmf_create_subsystem", 00:16:32.034 "params": { 00:16:32.034 "nqn": "nqn.2016-06.io.spdk:cnode26805", 00:16:32.034 "min_cntlid": 6, 00:16:32.034 "max_cntlid": 5 00:16:32.034 } 00:16:32.034 } 00:16:32.034 Got JSON-RPC error response 00:16:32.034 GoRPCClient: error on JSON-RPC call' 00:16:32.034 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/20 17:37:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode26805], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:32.034 request: 00:16:32.034 { 00:16:32.034 "method": "nvmf_create_subsystem", 00:16:32.034 "params": { 00:16:32.034 "nqn": "nqn.2016-06.io.spdk:cnode26805", 00:16:32.034 "min_cntlid": 6, 00:16:32.034 "max_cntlid": 5 00:16:32.034 } 00:16:32.034 } 00:16:32.034 Got JSON-RPC error response 00:16:32.034 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.034 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:32.292 { 00:16:32.292 "name": "foobar", 00:16:32.292 "method": "nvmf_delete_target", 00:16:32.292 "req_id": 1 00:16:32.292 } 00:16:32.292 Got JSON-RPC error response 00:16:32.292 response: 00:16:32.292 { 00:16:32.292 "code": -32602, 00:16:32.292 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:32.292 }' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:32.292 { 00:16:32.292 "name": "foobar", 00:16:32.292 "method": "nvmf_delete_target", 00:16:32.292 "req_id": 1 00:16:32.292 } 00:16:32.292 Got JSON-RPC error response 00:16:32.292 response: 00:16:32.292 { 00:16:32.292 "code": -32602, 00:16:32.292 "message": "The specified target doesn't exist, cannot delete it." 
00:16:32.292 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.292 rmmod nvme_tcp 00:16:32.292 rmmod nvme_fabrics 00:16:32.292 rmmod nvme_keyring 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 90722 ']' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 90722 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 90722 ']' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 90722 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90722 00:16:32.292 killing process with pid 90722 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90722' 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 90722 00:16:32.292 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 90722 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # 
iptables-restore 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:32.550 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:16:32.808 ************************************ 00:16:32.808 END TEST nvmf_invalid 00:16:32.808 ************************************ 00:16:32.808 00:16:32.808 real 0m7.046s 00:16:32.808 user 0m26.357s 00:16:32.808 sys 0m1.759s 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 ************************************ 00:16:32.808 START TEST nvmf_connect_stress 00:16:32.808 
************************************ 00:16:32.808 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:33.068 * Looking for test storage... 00:16:33.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.068 --rc genhtml_branch_coverage=1 00:16:33.068 --rc genhtml_function_coverage=1 00:16:33.068 --rc genhtml_legend=1 00:16:33.068 --rc geninfo_all_blocks=1 00:16:33.068 --rc geninfo_unexecuted_blocks=1 00:16:33.068 00:16:33.068 ' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.068 --rc genhtml_branch_coverage=1 00:16:33.068 --rc genhtml_function_coverage=1 00:16:33.068 --rc genhtml_legend=1 00:16:33.068 --rc geninfo_all_blocks=1 00:16:33.068 --rc geninfo_unexecuted_blocks=1 00:16:33.068 00:16:33.068 ' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.068 --rc genhtml_branch_coverage=1 00:16:33.068 --rc genhtml_function_coverage=1 00:16:33.068 --rc genhtml_legend=1 00:16:33.068 --rc geninfo_all_blocks=1 00:16:33.068 --rc geninfo_unexecuted_blocks=1 00:16:33.068 00:16:33.068 ' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:33.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.068 --rc genhtml_branch_coverage=1 00:16:33.068 --rc genhtml_function_coverage=1 00:16:33.068 --rc genhtml_legend=1 00:16:33.068 --rc geninfo_all_blocks=1 00:16:33.068 --rc geninfo_unexecuted_blocks=1 00:16:33.068 00:16:33.068 ' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
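# The lcov version gate traced above runs through the lt/cmp_versions helpers
# from scripts/common.sh: split each version string on ".-:" into components,
# then compare component-wise over the longer of the two lists. Below is a
# minimal standalone sketch of that idiom, simplified to the strict '<'/'>'
# cases exercised in the trace (the real helper also validates each component
# as a decimal and supports more operators); it is an illustration, not the
# exact SPDK source:
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v c1 c2
    IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer component list, padding the shorter one with zeros
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}
        ((c1 > c2)) && { [[ $op == '>' ]]; return; }
        ((c1 < c2)) && { [[ $op == '<' ]]; return; }
    done
    return 1   # equal: neither strictly less nor strictly greater
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "1.15 < 2"   # the branch taken in the trace above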
00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.068 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:33.069 17:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.069 17:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:33.069 Cannot find device "nvmf_init_br" 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:33.069 Cannot find device "nvmf_init_br2" 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:16:33.069 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:33.328 Cannot find device "nvmf_tgt_br" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.328 Cannot find device "nvmf_tgt_br2" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:33.328 Cannot find device "nvmf_init_br" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:33.328 Cannot find device "nvmf_init_br2" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:33.328 Cannot find device "nvmf_tgt_br" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:33.328 Cannot find device "nvmf_tgt_br2" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:33.328 Cannot find device "nvmf_br" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:33.328 Cannot find device "nvmf_init_if" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:33.328 Cannot find device "nvmf_init_if2" 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.328 17:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:16:33.328 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.329 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:33.329 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:33.329 17:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:33.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:33.588 00:16:33.588 --- 10.0.0.3 ping statistics --- 00:16:33.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.588 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:33.588 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:33.588 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:33.588 00:16:33.588 --- 10.0.0.4 ping statistics --- 00:16:33.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.588 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:16:33.588 00:16:33.588 --- 10.0.0.1 ping statistics --- 00:16:33.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.588 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:33.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:33.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:33.588 00:16:33.588 --- 10.0.0.2 ping statistics --- 00:16:33.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.588 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # return 0 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=91290 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 91290 00:16:33.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 91290 ']' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.588 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.588 [2024-11-20 17:37:15.305583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
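
The nvmf_veth_init trace above builds the test network that the rest of the run depends on: two initiator-side veth interfaces stay in the default namespace, their target-side counterparts are moved into the nvmf_tgt_ns_spdk namespace, and the four bridge-side peers are enslaved to nvmf_br so that 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target). A condensed sketch of that bring-up, reconstructed from the commands in the trace and shown for the first interface pair only (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    # each logical interface is one end of a veth pair; the *_br ends stay
    # behind in the default namespace for the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open TCP/4420 toward the initiator interface; the comment tags the rule
    # so nvmf_tcp_fini can later strip it via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The ping exchanges above verify both directions before the target starts: the default namespace reaches the in-namespace target addresses (10.0.0.3, 10.0.0.4), and the namespace reaches the initiator addresses (10.0.0.1, 10.0.0.2).
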
00:16:33.588 [2024-11-20 17:37:15.305943] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.847 [2024-11-20 17:37:15.455332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.847 [2024-11-20 17:37:15.594191] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.847 [2024-11-20 17:37:15.594600] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.847 [2024-11-20 17:37:15.594735] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.847 [2024-11-20 17:37:15.594909] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.847 [2024-11-20 17:37:15.594947] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.847 [2024-11-20 17:37:15.595210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.847 [2024-11-20 17:37:15.595341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.847 [2024-11-20 17:37:15.595336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.783 [2024-11-20 17:37:16.424753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:34.783 17:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.783 [2024-11-20 17:37:16.461551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.783 NULL1 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=91343 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.783 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.350 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:35.350 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:35.350 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.350 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.350 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.608 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.609 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:35.609 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.609 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.609 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.871 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.871 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:35.871 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.871 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.871 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.129 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.129 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:36.129 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.129 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.129 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.695 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.695 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:36.695 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.695 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.695 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.953 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.953 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:36.953 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.953 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.953 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.211 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.211 
17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:37.211 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.211 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.211 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.468 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.468 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:37.468 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.468 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.468 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.737 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.737 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:37.737 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.737 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.737 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.318 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.318 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:38.318 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.318 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.318 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.575 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.575 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:38.575 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.575 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.575 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.834 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.834 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:38.834 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.834 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.834 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.093 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.093 17:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:39.093 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.093 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.093 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.352 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.352 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:39.352 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.352 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.352 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.918 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.918 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:39.918 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.918 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.918 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.176 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.176 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:40.176 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.176 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.176 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.434 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:40.434 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.434 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.434 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.691 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.691 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:40.691 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.691 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.691 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.948 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.948 17:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:40.948 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.948 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.948 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.514 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:41.514 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.514 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.514 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.772 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.772 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:41.772 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.772 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.772 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.031 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.031 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:42.031 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.031 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.031 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.289 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.290 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:42.290 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.290 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.290 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.857 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.857 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:42.857 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.857 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.857 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.117 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.117 17:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:43.117 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.117 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.117 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.376 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.376 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:43.376 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.376 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.376 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.635 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.635 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:43.635 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.635 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.635 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.892 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.892 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:43.892 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.892 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.892 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.459 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.459 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:44.459 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.459 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.459 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.717 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.717 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:44.717 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.717 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.717 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.975 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.975 17:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:44.975 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.975 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.975 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.975 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91343 00:16:45.232 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (91343) - No such process 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 91343 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.232 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:45.232 rmmod nvme_tcp 00:16:45.490 rmmod nvme_fabrics 00:16:45.490 rmmod nvme_keyring 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 91290 ']' 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 91290 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 91290 ']' 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 91290 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91290 00:16:45.490 killing process with pid 91290 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:45.490 
17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91290' 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 91290 00:16:45.490 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 91290 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:45.749 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.007 17:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:16:46.007 00:16:46.007 real 0m13.082s 00:16:46.007 user 0m41.902s 00:16:46.007 sys 0m3.671s 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 ************************************ 00:16:46.007 END TEST nvmf_connect_stress 00:16:46.007 ************************************ 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 ************************************ 00:16:46.007 START TEST nvmf_fused_ordering 00:16:46.007 ************************************ 00:16:46.007 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:46.267 * Looking for test storage... 00:16:46.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.267 17:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:46.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.267 --rc genhtml_branch_coverage=1 00:16:46.267 --rc genhtml_function_coverage=1 00:16:46.267 --rc genhtml_legend=1 00:16:46.267 --rc geninfo_all_blocks=1 00:16:46.267 --rc geninfo_unexecuted_blocks=1 00:16:46.267 00:16:46.267 ' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:46.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.267 --rc genhtml_branch_coverage=1 00:16:46.267 --rc genhtml_function_coverage=1 00:16:46.267 --rc genhtml_legend=1 00:16:46.267 --rc geninfo_all_blocks=1 00:16:46.267 --rc geninfo_unexecuted_blocks=1 00:16:46.267 00:16:46.267 ' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:46.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.267 --rc genhtml_branch_coverage=1 00:16:46.267 --rc genhtml_function_coverage=1 00:16:46.267 --rc genhtml_legend=1 00:16:46.267 --rc geninfo_all_blocks=1 00:16:46.267 --rc geninfo_unexecuted_blocks=1 00:16:46.267 00:16:46.267 ' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:46.267 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:46.267 --rc genhtml_branch_coverage=1 00:16:46.267 --rc genhtml_function_coverage=1 00:16:46.267 --rc genhtml_legend=1 00:16:46.267 --rc geninfo_all_blocks=1 00:16:46.267 --rc geninfo_unexecuted_blocks=1 00:16:46.267 00:16:46.267 ' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.267 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:46.268 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:46.268 17:37:27 
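The one genuine error captured above deserves a gloss: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, and `test`'s `-eq` aborts with `[: : integer expression expected` whenever an operand is empty rather than an integer; the run survives only because the failed test evaluates false and execution falls through to the `@37` check. The `decimal` helper traced earlier avoids exactly this by regex-validating its argument before comparing. A minimal sketch of the pitfall and two tolerant variants (the `flag` variable is illustrative, not the name common.sh uses):

```bash
#!/usr/bin/env bash
flag=""   # stands in for the empty value nvmf/common.sh hit here

# Reproduces the error above: '-eq' requires integers on both sides.
[ "$flag" -eq 1 ] && echo enabled   # [: : integer expression expected

# Tolerant variants:
[ "${flag:-0}" -eq 1 ] && echo enabled                      # default empty to 0
[[ $flag =~ ^[0-9]+$ ]] && (( flag == 1 )) && echo enabled  # validate first, like decimal()
```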
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:46.268 Cannot find device "nvmf_init_br" 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:46.268 Cannot find device "nvmf_init_br2" 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:16:46.268 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:46.268 Cannot find device "nvmf_tgt_br" 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.268 Cannot find device "nvmf_tgt_br2" 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:46.268 Cannot find device "nvmf_init_br" 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:46.268 Cannot find device "nvmf_init_br2" 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:16:46.268 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:46.527 Cannot find device "nvmf_tgt_br" 00:16:46.527 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:16:46.527 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:46.527 Cannot find device "nvmf_tgt_br2" 00:16:46.527 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:16:46.527 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:46.527 Cannot find device "nvmf_br" 00:16:46.527 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:46.528 Cannot find device "nvmf_init_if" 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:16:46.528 
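Every `Cannot find device ...` complaint in this stretch is expected noise rather than a failure: `nvmf_veth_init` starts by tearing down whatever interfaces a previous run may have left behind, and each `ip link` error is swallowed, which is what the bare `# true` records are (the right-hand side of a `|| true`). The idiom, sketched with the device names from this trace:

```bash
#!/usr/bin/env bash
# Best-effort teardown: any of these devices may be absent on a fresh
# runner, so every command is allowed to fail without aborting.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster || true
    ip link set "$port" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if  || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
```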
17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:46.528 Cannot find device "nvmf_init_if2" 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.528 17:37:28 
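Taken together, the commands around this point build a small four-legged topology: two initiator veth pairs whose `nvmf_init_if*` ends keep 10.0.0.1/24 and 10.0.0.2/24 in the root namespace, two target pairs whose `nvmf_tgt_if*` ends move into `nvmf_tgt_ns_spdk` with 10.0.0.3/24 and 10.0.0.4/24, and (just below) an `nvmf_br` bridge that enslaves all four `*_br` peer ends so the two namespaces can reach each other. One target leg, condensed from the trace:

```bash
#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk                    # namespace that will host nvmf_tgt
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target end enters the namespace
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br master nvmf_br           # peer end joins the bridge
```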
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.528 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.786 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:46.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:46.787 00:16:46.787 --- 10.0.0.3 ping statistics --- 00:16:46.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.787 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:46.787 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:46.787 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:16:46.787 00:16:46.787 --- 10.0.0.4 ping statistics --- 00:16:46.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.787 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:46.787 00:16:46.787 --- 10.0.0.1 ping statistics --- 00:16:46.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.787 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:46.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:46.787 00:16:46.787 --- 10.0.0.2 ping statistics --- 00:16:46.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.787 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # return 0 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=91732 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 91732 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 91732 ']' 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
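The four pings above verify both directions across the bridge before any NVMe traffic is attempted. `nvmfappstart` has just launched the target with the namespace prefix assembled at `@227` (`NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`), and `waitforlisten` is now blocking until the RPC socket answers. A sketch of that launch-and-wait pattern; the polling loop is an illustrative assumption, not autotest_common.sh's exact code:

```bash
#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Run the target inside the namespace so it binds the 10.0.0.3/4 side.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    "$rpc_py" -s "$sock" rpc_get_methods &> /dev/null && break  # RPC server is up
    sleep 0.1
done
```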
00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.787 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:46.787 [2024-11-20 17:37:28.469407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:46.787 [2024-11-20 17:37:28.469534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.045 [2024-11-20 17:37:28.612349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.045 [2024-11-20 17:37:28.745301] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.045 [2024-11-20 17:37:28.745393] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.045 [2024-11-20 17:37:28.745403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.045 [2024-11-20 17:37:28.745412] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.045 [2024-11-20 17:37:28.745419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.045 [2024-11-20 17:37:28.745447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.993 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 [2024-11-20 17:37:29.660672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 [2024-11-20 17:37:29.680888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 NULL1 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.994 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:47.994 [2024-11-20 17:37:29.737255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
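Stripped of the xtrace noise, `fused_ordering.sh` provisions the target with five RPCs and then launches the test binary (whose startup banner appears just above): create the TCP transport, create subsystem `cnode1`, listen on the namespaced 10.0.0.3:4420, and back namespace 1 with a 1000 MiB null bdev. The same sequence as a plain script, with every argument taken from the trace:

```bash
#!/usr/bin/env bash
set -e
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192   # options from NVMF_TRANSPORT_OPTS above
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10            # allow any host; serial; up to 10 namespaces
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The `fused_ordering` app then connects using the `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'` string seen above and prints one `fused_ordering(N)` line per iteration, so a healthy run is nothing but a gapless counter.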
00:16:47.994 [2024-11-20 17:37:29.737312] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91782 ] 00:16:48.560 Attached to nqn.2016-06.io.spdk:cnode1 00:16:48.560 Namespace ID: 1 size: 1GB 00:16:48.560 fused_ordering(0) 00:16:48.560 fused_ordering(1) 00:16:48.560 fused_ordering(2) 00:16:48.560 fused_ordering(3) 00:16:48.560 fused_ordering(4) 00:16:48.560 fused_ordering(5) 00:16:48.560 fused_ordering(6) 00:16:48.560 fused_ordering(7) 00:16:48.560 fused_ordering(8) 00:16:48.560 fused_ordering(9) 00:16:48.560 fused_ordering(10) 00:16:48.560 fused_ordering(11) 00:16:48.560 fused_ordering(12) 00:16:48.560 fused_ordering(13) 00:16:48.560 fused_ordering(14) 00:16:48.560 fused_ordering(15) 00:16:48.560 fused_ordering(16) 00:16:48.560 fused_ordering(17) 00:16:48.560 fused_ordering(18) 00:16:48.560 fused_ordering(19) 00:16:48.560 fused_ordering(20) 00:16:48.560 fused_ordering(21) 00:16:48.560 fused_ordering(22) 00:16:48.560 fused_ordering(23) 00:16:48.560 fused_ordering(24) 00:16:48.560 fused_ordering(25) 00:16:48.560 fused_ordering(26) 00:16:48.560 fused_ordering(27) 00:16:48.560 fused_ordering(28) 00:16:48.560 fused_ordering(29) 00:16:48.560 fused_ordering(30) 00:16:48.560 fused_ordering(31) 00:16:48.560 fused_ordering(32) 00:16:48.560 fused_ordering(33) 00:16:48.560 fused_ordering(34) 00:16:48.560 fused_ordering(35) 00:16:48.560 fused_ordering(36) 00:16:48.560 fused_ordering(37) 00:16:48.560 fused_ordering(38) 00:16:48.560 fused_ordering(39) 00:16:48.560 fused_ordering(40) 00:16:48.560 fused_ordering(41) 00:16:48.560 fused_ordering(42) 00:16:48.560 fused_ordering(43) 00:16:48.560 fused_ordering(44) 00:16:48.560 fused_ordering(45) 00:16:48.560 fused_ordering(46) 00:16:48.560 fused_ordering(47) 00:16:48.560 fused_ordering(48) 00:16:48.560 fused_ordering(49) 00:16:48.560 fused_ordering(50) 00:16:48.560 fused_ordering(51) 00:16:48.560 fused_ordering(52) 00:16:48.560 fused_ordering(53) 00:16:48.560 fused_ordering(54) 00:16:48.560 fused_ordering(55) 00:16:48.560 fused_ordering(56) 00:16:48.560 fused_ordering(57) 00:16:48.560 fused_ordering(58) 00:16:48.560 fused_ordering(59) 00:16:48.560 fused_ordering(60) 00:16:48.560 fused_ordering(61) 00:16:48.560 fused_ordering(62) 00:16:48.560 fused_ordering(63) 00:16:48.560 fused_ordering(64) 00:16:48.560 fused_ordering(65) 00:16:48.560 fused_ordering(66) 00:16:48.560 fused_ordering(67) 00:16:48.560 fused_ordering(68) 00:16:48.560 fused_ordering(69) 00:16:48.560 fused_ordering(70) 00:16:48.560 fused_ordering(71) 00:16:48.560 fused_ordering(72) 00:16:48.560 fused_ordering(73) 00:16:48.560 fused_ordering(74) 00:16:48.560 fused_ordering(75) 00:16:48.560 fused_ordering(76) 00:16:48.560 fused_ordering(77) 00:16:48.560 fused_ordering(78) 00:16:48.560 fused_ordering(79) 00:16:48.560 fused_ordering(80) 00:16:48.560 fused_ordering(81) 00:16:48.560 fused_ordering(82) 00:16:48.560 fused_ordering(83) 00:16:48.560 fused_ordering(84) 00:16:48.560 fused_ordering(85) 00:16:48.560 fused_ordering(86) 00:16:48.560 fused_ordering(87) 00:16:48.560 fused_ordering(88) 00:16:48.560 fused_ordering(89) 00:16:48.560 fused_ordering(90) 00:16:48.560 fused_ordering(91) 00:16:48.560 fused_ordering(92) 00:16:48.560 fused_ordering(93) 00:16:48.560 fused_ordering(94) 00:16:48.560 fused_ordering(95) 00:16:48.560 fused_ordering(96) 00:16:48.560 fused_ordering(97) 00:16:48.560 
fused_ordering(98) 00:16:48.560 fused_ordering(99) 00:16:48.560 fused_ordering(100) 00:16:48.560 fused_ordering(101) 00:16:48.560 fused_ordering(102) 00:16:48.560 fused_ordering(103) 00:16:48.560 fused_ordering(104) 00:16:48.560 fused_ordering(105) 00:16:48.560 fused_ordering(106) 00:16:48.560 fused_ordering(107) 00:16:48.560 fused_ordering(108) 00:16:48.560 fused_ordering(109) 00:16:48.560 fused_ordering(110) 00:16:48.560 fused_ordering(111) 00:16:48.560 fused_ordering(112) 00:16:48.560 fused_ordering(113) 00:16:48.560 fused_ordering(114) 00:16:48.560 fused_ordering(115) 00:16:48.560 fused_ordering(116) 00:16:48.560 fused_ordering(117) 00:16:48.560 fused_ordering(118) 00:16:48.560 fused_ordering(119) 00:16:48.560 fused_ordering(120) 00:16:48.560 fused_ordering(121) 00:16:48.560 fused_ordering(122) 00:16:48.560 fused_ordering(123) 00:16:48.560 fused_ordering(124) 00:16:48.560 fused_ordering(125) 00:16:48.560 fused_ordering(126) 00:16:48.560 fused_ordering(127) 00:16:48.560 fused_ordering(128) 00:16:48.560 fused_ordering(129) 00:16:48.560 fused_ordering(130) 00:16:48.560 fused_ordering(131) 00:16:48.560 fused_ordering(132) 00:16:48.560 fused_ordering(133) 00:16:48.560 fused_ordering(134) 00:16:48.560 fused_ordering(135) 00:16:48.560 fused_ordering(136) 00:16:48.560 fused_ordering(137) 00:16:48.560 fused_ordering(138) 00:16:48.560 fused_ordering(139) 00:16:48.560 fused_ordering(140) 00:16:48.560 fused_ordering(141) 00:16:48.560 fused_ordering(142) 00:16:48.560 fused_ordering(143) 00:16:48.560 fused_ordering(144) 00:16:48.560 fused_ordering(145) 00:16:48.560 fused_ordering(146) 00:16:48.560 fused_ordering(147) 00:16:48.560 fused_ordering(148) 00:16:48.560 fused_ordering(149) 00:16:48.560 fused_ordering(150) 00:16:48.560 fused_ordering(151) 00:16:48.560 fused_ordering(152) 00:16:48.560 fused_ordering(153) 00:16:48.560 fused_ordering(154) 00:16:48.560 fused_ordering(155) 00:16:48.560 fused_ordering(156) 00:16:48.560 fused_ordering(157) 00:16:48.560 fused_ordering(158) 00:16:48.560 fused_ordering(159) 00:16:48.560 fused_ordering(160) 00:16:48.560 fused_ordering(161) 00:16:48.560 fused_ordering(162) 00:16:48.560 fused_ordering(163) 00:16:48.560 fused_ordering(164) 00:16:48.560 fused_ordering(165) 00:16:48.560 fused_ordering(166) 00:16:48.560 fused_ordering(167) 00:16:48.560 fused_ordering(168) 00:16:48.560 fused_ordering(169) 00:16:48.560 fused_ordering(170) 00:16:48.560 fused_ordering(171) 00:16:48.560 fused_ordering(172) 00:16:48.560 fused_ordering(173) 00:16:48.560 fused_ordering(174) 00:16:48.560 fused_ordering(175) 00:16:48.560 fused_ordering(176) 00:16:48.560 fused_ordering(177) 00:16:48.560 fused_ordering(178) 00:16:48.560 fused_ordering(179) 00:16:48.560 fused_ordering(180) 00:16:48.560 fused_ordering(181) 00:16:48.560 fused_ordering(182) 00:16:48.560 fused_ordering(183) 00:16:48.560 fused_ordering(184) 00:16:48.560 fused_ordering(185) 00:16:48.560 fused_ordering(186) 00:16:48.560 fused_ordering(187) 00:16:48.560 fused_ordering(188) 00:16:48.560 fused_ordering(189) 00:16:48.560 fused_ordering(190) 00:16:48.560 fused_ordering(191) 00:16:48.560 fused_ordering(192) 00:16:48.560 fused_ordering(193) 00:16:48.560 fused_ordering(194) 00:16:48.560 fused_ordering(195) 00:16:48.560 fused_ordering(196) 00:16:48.560 fused_ordering(197) 00:16:48.560 fused_ordering(198) 00:16:48.560 fused_ordering(199) 00:16:48.560 fused_ordering(200) 00:16:48.560 fused_ordering(201) 00:16:48.560 fused_ordering(202) 00:16:48.560 fused_ordering(203) 00:16:48.560 fused_ordering(204) 00:16:48.560 fused_ordering(205) 
00:16:48.819 fused_ordering(206) 00:16:48.819 fused_ordering(207) 00:16:48.819 fused_ordering(208) 00:16:48.819 fused_ordering(209) 00:16:48.819 fused_ordering(210) 00:16:48.819 fused_ordering(211) 00:16:48.819 fused_ordering(212) 00:16:48.819 fused_ordering(213) 00:16:48.819 fused_ordering(214) 00:16:48.819 fused_ordering(215) 00:16:48.819 fused_ordering(216) 00:16:48.819 fused_ordering(217) 00:16:48.819 fused_ordering(218) 00:16:48.819 fused_ordering(219) 00:16:48.819 fused_ordering(220) 00:16:48.819 fused_ordering(221) 00:16:48.819 fused_ordering(222) 00:16:48.819 fused_ordering(223) 00:16:48.819 fused_ordering(224) 00:16:48.819 fused_ordering(225) 00:16:48.819 fused_ordering(226) 00:16:48.819 fused_ordering(227) 00:16:48.819 fused_ordering(228) 00:16:48.819 fused_ordering(229) 00:16:48.819 fused_ordering(230) 00:16:48.819 fused_ordering(231) 00:16:48.819 fused_ordering(232) 00:16:48.819 fused_ordering(233) 00:16:48.819 fused_ordering(234) 00:16:48.819 fused_ordering(235) 00:16:48.819 fused_ordering(236) 00:16:48.819 fused_ordering(237) 00:16:48.819 fused_ordering(238) 00:16:48.819 fused_ordering(239) 00:16:48.819 fused_ordering(240) 00:16:48.819 fused_ordering(241) 00:16:48.819 fused_ordering(242) 00:16:48.819 fused_ordering(243) 00:16:48.819 fused_ordering(244) 00:16:48.819 fused_ordering(245) 00:16:48.819 fused_ordering(246) 00:16:48.819 fused_ordering(247) 00:16:48.819 fused_ordering(248) 00:16:48.819 fused_ordering(249) 00:16:48.819 fused_ordering(250) 00:16:48.819 fused_ordering(251) 00:16:48.819 fused_ordering(252) 00:16:48.819 fused_ordering(253) 00:16:48.819 fused_ordering(254) 00:16:48.819 fused_ordering(255) 00:16:48.819 fused_ordering(256) 00:16:48.819 fused_ordering(257) 00:16:48.819 fused_ordering(258) 00:16:48.819 fused_ordering(259) 00:16:48.819 fused_ordering(260) 00:16:48.819 fused_ordering(261) 00:16:48.819 fused_ordering(262) 00:16:48.819 fused_ordering(263) 00:16:48.819 fused_ordering(264) 00:16:48.819 fused_ordering(265) 00:16:48.819 fused_ordering(266) 00:16:48.819 fused_ordering(267) 00:16:48.819 fused_ordering(268) 00:16:48.819 fused_ordering(269) 00:16:48.819 fused_ordering(270) 00:16:48.819 fused_ordering(271) 00:16:48.819 fused_ordering(272) 00:16:48.819 fused_ordering(273) 00:16:48.819 fused_ordering(274) 00:16:48.819 fused_ordering(275) 00:16:48.819 fused_ordering(276) 00:16:48.819 fused_ordering(277) 00:16:48.819 fused_ordering(278) 00:16:48.819 fused_ordering(279) 00:16:48.819 fused_ordering(280) 00:16:48.819 fused_ordering(281) 00:16:48.819 fused_ordering(282) 00:16:48.819 fused_ordering(283) 00:16:48.819 fused_ordering(284) 00:16:48.819 fused_ordering(285) 00:16:48.819 fused_ordering(286) 00:16:48.819 fused_ordering(287) 00:16:48.819 fused_ordering(288) 00:16:48.819 fused_ordering(289) 00:16:48.819 fused_ordering(290) 00:16:48.819 fused_ordering(291) 00:16:48.819 fused_ordering(292) 00:16:48.819 fused_ordering(293) 00:16:48.819 fused_ordering(294) 00:16:48.819 fused_ordering(295) 00:16:48.819 fused_ordering(296) 00:16:48.819 fused_ordering(297) 00:16:48.819 fused_ordering(298) 00:16:48.819 fused_ordering(299) 00:16:48.819 fused_ordering(300) 00:16:48.819 fused_ordering(301) 00:16:48.819 fused_ordering(302) 00:16:48.819 fused_ordering(303) 00:16:48.819 fused_ordering(304) 00:16:48.819 fused_ordering(305) 00:16:48.819 fused_ordering(306) 00:16:48.819 fused_ordering(307) 00:16:48.819 fused_ordering(308) 00:16:48.819 fused_ordering(309) 00:16:48.819 fused_ordering(310) 00:16:48.819 fused_ordering(311) 00:16:48.819 fused_ordering(312) 00:16:48.819 
fused_ordering(313) 00:16:48.819 fused_ordering(314) 00:16:48.819 fused_ordering(315) 00:16:48.819 fused_ordering(316) 00:16:48.819 fused_ordering(317) 00:16:48.819 fused_ordering(318) 00:16:48.819 fused_ordering(319) 00:16:48.819 fused_ordering(320) 00:16:48.819 fused_ordering(321) 00:16:48.819 fused_ordering(322) 00:16:48.820 fused_ordering(323) 00:16:48.820 fused_ordering(324) 00:16:48.820 fused_ordering(325) 00:16:48.820 fused_ordering(326) 00:16:48.820 fused_ordering(327) 00:16:48.820 fused_ordering(328) 00:16:48.820 fused_ordering(329) 00:16:48.820 fused_ordering(330) 00:16:48.820 fused_ordering(331) 00:16:48.820 fused_ordering(332) 00:16:48.820 fused_ordering(333) 00:16:48.820 fused_ordering(334) 00:16:48.820 fused_ordering(335) 00:16:48.820 fused_ordering(336) 00:16:48.820 fused_ordering(337) 00:16:48.820 fused_ordering(338) 00:16:48.820 fused_ordering(339) 00:16:48.820 fused_ordering(340) 00:16:48.820 fused_ordering(341) 00:16:48.820 fused_ordering(342) 00:16:48.820 fused_ordering(343) 00:16:48.820 fused_ordering(344) 00:16:48.820 fused_ordering(345) 00:16:48.820 fused_ordering(346) 00:16:48.820 fused_ordering(347) 00:16:48.820 fused_ordering(348) 00:16:48.820 fused_ordering(349) 00:16:48.820 fused_ordering(350) 00:16:48.820 fused_ordering(351) 00:16:48.820 fused_ordering(352) 00:16:48.820 fused_ordering(353) 00:16:48.820 fused_ordering(354) 00:16:48.820 fused_ordering(355) 00:16:48.820 fused_ordering(356) 00:16:48.820 fused_ordering(357) 00:16:48.820 fused_ordering(358) 00:16:48.820 fused_ordering(359) 00:16:48.820 fused_ordering(360) 00:16:48.820 fused_ordering(361) 00:16:48.820 fused_ordering(362) 00:16:48.820 fused_ordering(363) 00:16:48.820 fused_ordering(364) 00:16:48.820 fused_ordering(365) 00:16:48.820 fused_ordering(366) 00:16:48.820 fused_ordering(367) 00:16:48.820 fused_ordering(368) 00:16:48.820 fused_ordering(369) 00:16:48.820 fused_ordering(370) 00:16:48.820 fused_ordering(371) 00:16:48.820 fused_ordering(372) 00:16:48.820 fused_ordering(373) 00:16:48.820 fused_ordering(374) 00:16:48.820 fused_ordering(375) 00:16:48.820 fused_ordering(376) 00:16:48.820 fused_ordering(377) 00:16:48.820 fused_ordering(378) 00:16:48.820 fused_ordering(379) 00:16:48.820 fused_ordering(380) 00:16:48.820 fused_ordering(381) 00:16:48.820 fused_ordering(382) 00:16:48.820 fused_ordering(383) 00:16:48.820 fused_ordering(384) 00:16:48.820 fused_ordering(385) 00:16:48.820 fused_ordering(386) 00:16:48.820 fused_ordering(387) 00:16:48.820 fused_ordering(388) 00:16:48.820 fused_ordering(389) 00:16:48.820 fused_ordering(390) 00:16:48.820 fused_ordering(391) 00:16:48.820 fused_ordering(392) 00:16:48.820 fused_ordering(393) 00:16:48.820 fused_ordering(394) 00:16:48.820 fused_ordering(395) 00:16:48.820 fused_ordering(396) 00:16:48.820 fused_ordering(397) 00:16:48.820 fused_ordering(398) 00:16:48.820 fused_ordering(399) 00:16:48.820 fused_ordering(400) 00:16:48.820 fused_ordering(401) 00:16:48.820 fused_ordering(402) 00:16:48.820 fused_ordering(403) 00:16:48.820 fused_ordering(404) 00:16:48.820 fused_ordering(405) 00:16:48.820 fused_ordering(406) 00:16:48.820 fused_ordering(407) 00:16:48.820 fused_ordering(408) 00:16:48.820 fused_ordering(409) 00:16:48.820 fused_ordering(410) 00:16:49.079 fused_ordering(411) 00:16:49.079 fused_ordering(412) 00:16:49.079 fused_ordering(413) 00:16:49.079 fused_ordering(414) 00:16:49.079 fused_ordering(415) 00:16:49.079 fused_ordering(416) 00:16:49.079 fused_ordering(417) 00:16:49.079 fused_ordering(418) 00:16:49.079 fused_ordering(419) 00:16:49.079 fused_ordering(420) 
00:16:49.079 fused_ordering(421) 00:16:49.079 fused_ordering(422) 00:16:49.079 fused_ordering(423) 00:16:49.079 fused_ordering(424) 00:16:49.079 fused_ordering(425) 00:16:49.079 fused_ordering(426) 00:16:49.079 fused_ordering(427) 00:16:49.079 fused_ordering(428) 00:16:49.079 fused_ordering(429) 00:16:49.079 fused_ordering(430) 00:16:49.079 fused_ordering(431) 00:16:49.079 fused_ordering(432) 00:16:49.079 fused_ordering(433) 00:16:49.079 fused_ordering(434) 00:16:49.079 fused_ordering(435) 00:16:49.079 fused_ordering(436) 00:16:49.079 fused_ordering(437) 00:16:49.079 fused_ordering(438) 00:16:49.079 fused_ordering(439) 00:16:49.079 fused_ordering(440) 00:16:49.079 fused_ordering(441) 00:16:49.079 fused_ordering(442) 00:16:49.079 fused_ordering(443) 00:16:49.079 fused_ordering(444) 00:16:49.079 fused_ordering(445) 00:16:49.079 fused_ordering(446) 00:16:49.079 fused_ordering(447) 00:16:49.079 fused_ordering(448) 00:16:49.079 fused_ordering(449) 00:16:49.079 fused_ordering(450) 00:16:49.079 fused_ordering(451) 00:16:49.079 fused_ordering(452) 00:16:49.079 fused_ordering(453) 00:16:49.079 fused_ordering(454) 00:16:49.079 fused_ordering(455) 00:16:49.079 fused_ordering(456) 00:16:49.079 fused_ordering(457) 00:16:49.079 fused_ordering(458) 00:16:49.079 fused_ordering(459) 00:16:49.079 fused_ordering(460) 00:16:49.079 fused_ordering(461) 00:16:49.079 fused_ordering(462) 00:16:49.079 fused_ordering(463) 00:16:49.079 fused_ordering(464) 00:16:49.079 fused_ordering(465) 00:16:49.079 fused_ordering(466) 00:16:49.079 fused_ordering(467) 00:16:49.079 fused_ordering(468) 00:16:49.079 fused_ordering(469) 00:16:49.079 fused_ordering(470) 00:16:49.079 fused_ordering(471) 00:16:49.079 fused_ordering(472) 00:16:49.079 fused_ordering(473) 00:16:49.079 fused_ordering(474) 00:16:49.079 fused_ordering(475) 00:16:49.079 fused_ordering(476) 00:16:49.079 fused_ordering(477) 00:16:49.079 fused_ordering(478) 00:16:49.079 fused_ordering(479) 00:16:49.079 fused_ordering(480) 00:16:49.079 fused_ordering(481) 00:16:49.079 fused_ordering(482) 00:16:49.079 fused_ordering(483) 00:16:49.079 fused_ordering(484) 00:16:49.079 fused_ordering(485) 00:16:49.079 fused_ordering(486) 00:16:49.079 fused_ordering(487) 00:16:49.079 fused_ordering(488) 00:16:49.079 fused_ordering(489) 00:16:49.079 fused_ordering(490) 00:16:49.079 fused_ordering(491) 00:16:49.079 fused_ordering(492) 00:16:49.079 fused_ordering(493) 00:16:49.079 fused_ordering(494) 00:16:49.079 fused_ordering(495) 00:16:49.079 fused_ordering(496) 00:16:49.079 fused_ordering(497) 00:16:49.079 fused_ordering(498) 00:16:49.079 fused_ordering(499) 00:16:49.079 fused_ordering(500) 00:16:49.079 fused_ordering(501) 00:16:49.079 fused_ordering(502) 00:16:49.079 fused_ordering(503) 00:16:49.079 fused_ordering(504) 00:16:49.079 fused_ordering(505) 00:16:49.079 fused_ordering(506) 00:16:49.079 fused_ordering(507) 00:16:49.079 fused_ordering(508) 00:16:49.079 fused_ordering(509) 00:16:49.079 fused_ordering(510) 00:16:49.079 fused_ordering(511) 00:16:49.079 fused_ordering(512) 00:16:49.079 fused_ordering(513) 00:16:49.079 fused_ordering(514) 00:16:49.079 fused_ordering(515) 00:16:49.079 fused_ordering(516) 00:16:49.079 fused_ordering(517) 00:16:49.079 fused_ordering(518) 00:16:49.079 fused_ordering(519) 00:16:49.079 fused_ordering(520) 00:16:49.079 fused_ordering(521) 00:16:49.079 fused_ordering(522) 00:16:49.079 fused_ordering(523) 00:16:49.079 fused_ordering(524) 00:16:49.079 fused_ordering(525) 00:16:49.079 fused_ordering(526) 00:16:49.079 fused_ordering(527) 00:16:49.079 
fused_ordering(528) 00:16:49.079 fused_ordering(529) 00:16:49.079 fused_ordering(530) 00:16:49.079 fused_ordering(531) 00:16:49.079 fused_ordering(532) 00:16:49.079 fused_ordering(533) 00:16:49.079 fused_ordering(534) 00:16:49.079 fused_ordering(535) 00:16:49.079 fused_ordering(536) 00:16:49.079 fused_ordering(537) 00:16:49.079 fused_ordering(538) 00:16:49.079 fused_ordering(539) 00:16:49.079 fused_ordering(540) 00:16:49.079 fused_ordering(541) 00:16:49.079 fused_ordering(542) 00:16:49.079 fused_ordering(543) 00:16:49.079 fused_ordering(544) 00:16:49.079 fused_ordering(545) 00:16:49.079 fused_ordering(546) 00:16:49.079 fused_ordering(547) 00:16:49.080 fused_ordering(548) 00:16:49.080 fused_ordering(549) 00:16:49.080 fused_ordering(550) 00:16:49.080 fused_ordering(551) 00:16:49.080 fused_ordering(552) 00:16:49.080 fused_ordering(553) 00:16:49.080 fused_ordering(554) 00:16:49.080 fused_ordering(555) 00:16:49.080 fused_ordering(556) 00:16:49.080 fused_ordering(557) 00:16:49.080 fused_ordering(558) 00:16:49.080 fused_ordering(559) 00:16:49.080 fused_ordering(560) 00:16:49.080 fused_ordering(561) 00:16:49.080 fused_ordering(562) 00:16:49.080 fused_ordering(563) 00:16:49.080 fused_ordering(564) 00:16:49.080 fused_ordering(565) 00:16:49.080 fused_ordering(566) 00:16:49.080 fused_ordering(567) 00:16:49.080 fused_ordering(568) 00:16:49.080 fused_ordering(569) 00:16:49.080 fused_ordering(570) 00:16:49.080 fused_ordering(571) 00:16:49.080 fused_ordering(572) 00:16:49.080 fused_ordering(573) 00:16:49.080 fused_ordering(574) 00:16:49.080 fused_ordering(575) 00:16:49.080 fused_ordering(576) 00:16:49.080 fused_ordering(577) 00:16:49.080 fused_ordering(578) 00:16:49.080 fused_ordering(579) 00:16:49.080 fused_ordering(580) 00:16:49.080 fused_ordering(581) 00:16:49.080 fused_ordering(582) 00:16:49.080 fused_ordering(583) 00:16:49.080 fused_ordering(584) 00:16:49.080 fused_ordering(585) 00:16:49.080 fused_ordering(586) 00:16:49.080 fused_ordering(587) 00:16:49.080 fused_ordering(588) 00:16:49.080 fused_ordering(589) 00:16:49.080 fused_ordering(590) 00:16:49.080 fused_ordering(591) 00:16:49.080 fused_ordering(592) 00:16:49.080 fused_ordering(593) 00:16:49.080 fused_ordering(594) 00:16:49.080 fused_ordering(595) 00:16:49.080 fused_ordering(596) 00:16:49.080 fused_ordering(597) 00:16:49.080 fused_ordering(598) 00:16:49.080 fused_ordering(599) 00:16:49.080 fused_ordering(600) 00:16:49.080 fused_ordering(601) 00:16:49.080 fused_ordering(602) 00:16:49.080 fused_ordering(603) 00:16:49.080 fused_ordering(604) 00:16:49.080 fused_ordering(605) 00:16:49.080 fused_ordering(606) 00:16:49.080 fused_ordering(607) 00:16:49.080 fused_ordering(608) 00:16:49.080 fused_ordering(609) 00:16:49.080 fused_ordering(610) 00:16:49.080 fused_ordering(611) 00:16:49.080 fused_ordering(612) 00:16:49.080 fused_ordering(613) 00:16:49.080 fused_ordering(614) 00:16:49.080 fused_ordering(615) 00:16:49.648 fused_ordering(616) 00:16:49.648 fused_ordering(617) 00:16:49.648 fused_ordering(618) 00:16:49.648 fused_ordering(619) 00:16:49.648 fused_ordering(620) 00:16:49.648 fused_ordering(621) 00:16:49.648 fused_ordering(622) 00:16:49.648 fused_ordering(623) 00:16:49.648 fused_ordering(624) 00:16:49.648 fused_ordering(625) 00:16:49.648 fused_ordering(626) 00:16:49.648 fused_ordering(627) 00:16:49.648 fused_ordering(628) 00:16:49.648 fused_ordering(629) 00:16:49.648 fused_ordering(630) 00:16:49.648 fused_ordering(631) 00:16:49.648 fused_ordering(632) 00:16:49.648 fused_ordering(633) 00:16:49.648 fused_ordering(634) 00:16:49.648 fused_ordering(635) 
00:16:49.648 fused_ordering(636) 00:16:49.648 fused_ordering(637) 00:16:49.648 fused_ordering(638) 00:16:49.648 fused_ordering(639) 00:16:49.648 fused_ordering(640) 00:16:49.648 fused_ordering(641) 00:16:49.648 fused_ordering(642) 00:16:49.648 fused_ordering(643) 00:16:49.648 fused_ordering(644) 00:16:49.648 fused_ordering(645) 00:16:49.648 fused_ordering(646) 00:16:49.648 fused_ordering(647) 00:16:49.648 fused_ordering(648) 00:16:49.648 fused_ordering(649) 00:16:49.648 fused_ordering(650) 00:16:49.648 fused_ordering(651) 00:16:49.648 fused_ordering(652) 00:16:49.648 fused_ordering(653) 00:16:49.648 fused_ordering(654) 00:16:49.648 fused_ordering(655) 00:16:49.648 fused_ordering(656) 00:16:49.648 fused_ordering(657) 00:16:49.648 fused_ordering(658) 00:16:49.648 fused_ordering(659) 00:16:49.648 fused_ordering(660) 00:16:49.648 fused_ordering(661) 00:16:49.648 fused_ordering(662) 00:16:49.648 fused_ordering(663) 00:16:49.648 fused_ordering(664) 00:16:49.648 fused_ordering(665) 00:16:49.648 fused_ordering(666) 00:16:49.648 fused_ordering(667) 00:16:49.648 fused_ordering(668) 00:16:49.648 fused_ordering(669) 00:16:49.648 fused_ordering(670) 00:16:49.648 fused_ordering(671) 00:16:49.648 fused_ordering(672) 00:16:49.648 fused_ordering(673) 00:16:49.648 fused_ordering(674) 00:16:49.648 fused_ordering(675) 00:16:49.648 fused_ordering(676) 00:16:49.648 fused_ordering(677) 00:16:49.648 fused_ordering(678) 00:16:49.648 fused_ordering(679) 00:16:49.648 fused_ordering(680) 00:16:49.648 fused_ordering(681) 00:16:49.648 fused_ordering(682) 00:16:49.648 fused_ordering(683) 00:16:49.648 fused_ordering(684) 00:16:49.648 fused_ordering(685) 00:16:49.648 fused_ordering(686) 00:16:49.648 fused_ordering(687) 00:16:49.648 fused_ordering(688) 00:16:49.648 fused_ordering(689) 00:16:49.648 fused_ordering(690) 00:16:49.648 fused_ordering(691) 00:16:49.648 fused_ordering(692) 00:16:49.648 fused_ordering(693) 00:16:49.648 fused_ordering(694) 00:16:49.648 fused_ordering(695) 00:16:49.648 fused_ordering(696) 00:16:49.648 fused_ordering(697) 00:16:49.648 fused_ordering(698) 00:16:49.648 fused_ordering(699) 00:16:49.648 fused_ordering(700) 00:16:49.648 fused_ordering(701) 00:16:49.648 fused_ordering(702) 00:16:49.648 fused_ordering(703) 00:16:49.648 fused_ordering(704) 00:16:49.648 fused_ordering(705) 00:16:49.648 fused_ordering(706) 00:16:49.648 fused_ordering(707) 00:16:49.648 fused_ordering(708) 00:16:49.648 fused_ordering(709) 00:16:49.648 fused_ordering(710) 00:16:49.648 fused_ordering(711) 00:16:49.648 fused_ordering(712) 00:16:49.648 fused_ordering(713) 00:16:49.648 fused_ordering(714) 00:16:49.648 fused_ordering(715) 00:16:49.648 fused_ordering(716) 00:16:49.648 fused_ordering(717) 00:16:49.648 fused_ordering(718) 00:16:49.648 fused_ordering(719) 00:16:49.648 fused_ordering(720) 00:16:49.648 fused_ordering(721) 00:16:49.648 fused_ordering(722) 00:16:49.648 fused_ordering(723) 00:16:49.648 fused_ordering(724) 00:16:49.648 fused_ordering(725) 00:16:49.648 fused_ordering(726) 00:16:49.648 fused_ordering(727) 00:16:49.648 fused_ordering(728) 00:16:49.648 fused_ordering(729) 00:16:49.648 fused_ordering(730) 00:16:49.648 fused_ordering(731) 00:16:49.648 fused_ordering(732) 00:16:49.648 fused_ordering(733) 00:16:49.648 fused_ordering(734) 00:16:49.648 fused_ordering(735) 00:16:49.648 fused_ordering(736) 00:16:49.648 fused_ordering(737) 00:16:49.648 fused_ordering(738) 00:16:49.648 fused_ordering(739) 00:16:49.648 fused_ordering(740) 00:16:49.648 fused_ordering(741) 00:16:49.648 fused_ordering(742) 00:16:49.648 
fused_ordering(743) 00:16:49.648 ... fused_ordering(1023) 00:16:50.218
17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:50.218 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:50.477 rmmod nvme_tcp
00:16:50.477 rmmod nvme_fabrics
00:16:50.477 rmmod nvme_keyring
00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:16:50.477 17:37:32
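The nvmfcleanup trace above reduces to a short module-unload loop on the initiator side. A minimal sketch of that loop, with the parts the trace does not show (the loop's exit test and any back-off) marked as assumptions:

    # unload the NVMe/TCP initiator modules; retried because rmmod fails while
    # a controller is still connected, hence the set +e / set -e bracket
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # assumed exit condition; a successful pass also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
        sleep 1                            # assumed back-off, not visible in this trace
    done
    set -e
    modprobe -v -r nvme-fabrics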
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 91732 ']' 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 91732 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 91732 ']' 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 91732 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91732 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:50.477 killing process with pid 91732 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91732' 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 91732 00:16:50.477 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 91732 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.736 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:16:50.994 ************************************ 00:16:50.994 END TEST nvmf_fused_ordering 00:16:50.994 ************************************ 00:16:50.994 00:16:50.994 real 0m4.987s 00:16:50.994 user 0m5.650s 00:16:50.994 sys 0m1.695s 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:50.994 17:37:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:50.995 17:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:50.995 17:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.995 17:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.995 ************************************ 00:16:50.995 START TEST nvmf_ns_masking 00:16:50.995 ************************************ 00:16:50.995 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:51.254 * Looking for test storage... 
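Condensed, the nvmf_tcp_fini/nvmf_veth_fini teardown traced above amounts to: strip the SPDK-tagged iptables rules, detach and delete the veth pairs and bridge, then drop the target namespace. A sketch of that same sequence (the body of _remove_spdk_ns runs with xtrace disabled, so its final command here is an assumption):

    # remove only the rules SPDK added (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # dismantle the test topology built by nvmf_veth_init
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: not shown in this trace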
00:16:51.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.254 --rc genhtml_branch_coverage=1 00:16:51.254 --rc genhtml_function_coverage=1 00:16:51.254 --rc genhtml_legend=1 00:16:51.254 --rc geninfo_all_blocks=1 00:16:51.254 --rc geninfo_unexecuted_blocks=1 00:16:51.254 00:16:51.254 ' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.254 --rc genhtml_branch_coverage=1 00:16:51.254 --rc genhtml_function_coverage=1 00:16:51.254 --rc genhtml_legend=1 00:16:51.254 --rc geninfo_all_blocks=1 00:16:51.254 --rc geninfo_unexecuted_blocks=1 00:16:51.254 00:16:51.254 ' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.254 --rc genhtml_branch_coverage=1 00:16:51.254 --rc genhtml_function_coverage=1 00:16:51.254 --rc genhtml_legend=1 00:16:51.254 --rc geninfo_all_blocks=1 00:16:51.254 --rc geninfo_unexecuted_blocks=1 00:16:51.254 00:16:51.254 ' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.254 --rc genhtml_branch_coverage=1 00:16:51.254 --rc genhtml_function_coverage=1 00:16:51.254 --rc genhtml_legend=1 00:16:51.254 --rc geninfo_all_blocks=1 00:16:51.254 --rc geninfo_unexecuted_blocks=1 00:16:51.254 00:16:51.254 ' 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.254 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:51.255 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5ebb1e31-7f18-4585-8a5d-34d406a53e77 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7a01128b-f7b9-46fb-af81-67f4293ed01f 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dfe8021a-79f6-4731-ba78-7739ccb26624 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:51.255 17:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.255 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:51.255 Cannot find device "nvmf_init_br" 00:16:51.256 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:16:51.256 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:51.514 Cannot find device "nvmf_init_br2" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:51.514 Cannot find device "nvmf_tgt_br" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.514 Cannot find device "nvmf_tgt_br2" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:51.514 Cannot find device "nvmf_init_br" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:51.514 Cannot find device "nvmf_init_br2" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:51.514 Cannot find device "nvmf_tgt_br" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:51.514 Cannot find device 
"nvmf_tgt_br2" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:51.514 Cannot find device "nvmf_br" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:51.514 Cannot find device "nvmf_init_if" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:51.514 Cannot find device "nvmf_init_if2" 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:51.514 
17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.514 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:51.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:51.774 00:16:51.774 --- 10.0.0.3 ping statistics --- 00:16:51.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.774 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:51.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:51.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:16:51.774 00:16:51.774 --- 10.0.0.4 ping statistics --- 00:16:51.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.774 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:51.774 00:16:51.774 --- 10.0.0.1 ping statistics --- 00:16:51.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.774 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:51.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:51.774 00:16:51.774 --- 10.0.0.2 ping statistics --- 00:16:51.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.774 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # return 0 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=92052 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 92052 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 92052 ']' 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.774 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.774 [2024-11-20 17:37:33.479028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:51.774 [2024-11-20 17:37:33.479172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.064 [2024-11-20 17:37:33.621479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.064 [2024-11-20 17:37:33.760019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.065 [2024-11-20 17:37:33.760093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.065 [2024-11-20 17:37:33.760106] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.065 [2024-11-20 17:37:33.760115] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.065 [2024-11-20 17:37:33.760123] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.065 [2024-11-20 17:37:33.760154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.032 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.291 [2024-11-20 17:37:34.932473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.291 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:53.291 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:53.291 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:53.549 Malloc1 00:16:53.549 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:54.117 Malloc2 00:16:54.117 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
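At this point the target application is running inside the namespace (nvmfpid=92052) and the TCP transport plus two malloc bdevs exist; the subsystem, namespace, and listener calls follow immediately below. Condensed, the provisioning side of the test looks like this (the rpc.py path is shortened into a helper variable for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192   # NVMF_TRANSPORT_OPTS ('-t tcp -o') plus '-u 8192'
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MB bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420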
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:54.376 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:54.635 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:54.894 [2024-11-20 17:37:36.562570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:54.894 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:54.894 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dfe8021a-79f6-4731-ba78-7739ccb26624 -a 10.0.0.3 -s 4420 -i 4 00:16:55.153 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.153 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.153 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.153 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:55.153 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.055 [ 0]:0x1 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c996d160d784e2da3577096bbf3678e 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c996d160d784e2da3577096bbf3678e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.055 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.625 [ 0]:0x1 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c996d160d784e2da3577096bbf3678e 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c996d160d784e2da3577096bbf3678e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:57.625 [ 1]:0x2 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.625 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.898 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:58.170 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:58.170 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dfe8021a-79f6-4731-ba78-7739ccb26624 -a 10.0.0.3 -s 4420 -i 4 00:16:58.428 17:37:39 
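Namespace 1 has just been removed and re-added with --no-auto-visible, so after the reconnect below it must stay hidden from the host until it is explicitly granted. The grant/revoke pair exercised in the remainder of the test condenses to (same rpc.py helper variable as in the sketch above):

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # grant: ns 1 becomes visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # revoke: ns 1 is hidden again
    # host-side probe used by ns_is_visible:
    nvme list-ns /dev/nvme0                                  # masked namespaces drop out of the listing
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid      # an all-zero nguid likewise marks an invisible namespace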
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:58.428 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:58.428 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.428 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:58.428 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:58.428 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.332 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.332 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.332 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.332 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.591 [ 0]:0x2 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.591 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:00.849 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:00.849 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.849 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.849 [ 0]:0x1 00:17:00.849 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.849 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c996d160d784e2da3577096bbf3678e 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c996d160d784e2da3577096bbf3678e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:01.108 [ 1]:0x2 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.108 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:01.367 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:01.368 [ 0]:0x2 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:01.368 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.626 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:01.626 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.626 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:01.626 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.626 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dfe8021a-79f6-4731-ba78-7739ccb26624 -a 10.0.0.3 -s 4420 -i 4 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:01.884 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:04.414 [ 0]:0x1 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c996d160d784e2da3577096bbf3678e 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c996d160d784e2da3577096bbf3678e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:04.414 [ 1]:0x2 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.414 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.414 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
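[editor's note] Every negative check in this trace goes through the NOT wrapper from common/autotest_common.sh, which runs its argument and passes only if the argument fails. A minimal sketch reconstructed from the traced records (the real helper also validates the argument with valid_exec_arg, and the "(( es > 128 ))" records show it treats signal-range exit codes specially):

    # Succeed only when the wrapped command fails -- simplified sketch.
    NOT() {
        local es=0
        "$@" || es=$?    # capture the wrapped command's exit status
        (( es != 0 ))    # invert it: non-zero status means NOT passes
    }

So `NOT ns_is_visible 0x1` above asserts that namespace 1 is masked: ns_is_visible exits non-zero when id-ns reports the all-zero NGUID, and NOT turns that failure into a pass.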
00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:04.673 [ 0]:0x2 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:04.673 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:04.931 [2024-11-20 17:37:46.529058] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:04.931 2024/11/20 17:37:46 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:17:04.931 request: 00:17:04.931 { 00:17:04.931 "method": "nvmf_ns_remove_host", 00:17:04.931 "params": { 00:17:04.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.931 "nsid": 2, 00:17:04.931 "host": "nqn.2016-06.io.spdk:host1" 00:17:04.931 } 00:17:04.931 } 00:17:04.931 Got JSON-RPC error response 00:17:04.931 GoRPCClient: error on JSON-RPC call 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.931 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:17:04.932 [ 0]:0x2 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a91cfcf5ae24b2c98b699a3d4906853 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a91cfcf5ae24b2c98b699a3d4906853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:04.932 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=92447 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 92447 /var/tmp/host.sock 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 92447 ']' 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.203 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:05.203 [2024-11-20 17:37:46.811556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
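[editor's note] The visibility probes that dominate the trace above all funnel through one helper in target/ns_masking.sh. A sketch reconstructed from the traced commands at @43-@45 (the real script may differ in detail, and uses the controller name detected at @26 rather than a hardcoded /dev/nvme0):

    ns_is_visible() {
        local nsid=$1    # e.g. 0x1 or 0x2
        # A visible namespace appears in the controller's active NS list.
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # A masked namespace identifies with an all-zero NGUID.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

This is why the nguid=00000000000000000000000000000000 records always appear inside a NOT block: the all-zero NGUID is the signature of a namespace hidden from this host.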
00:17:05.203 [2024-11-20 17:37:46.811671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92447 ] 00:17:05.203 [2024-11-20 17:37:46.949910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.477 [2024-11-20 17:37:47.082319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.410 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.410 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:06.410 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.410 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:06.975 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5ebb1e31-7f18-4585-8a5d-34d406a53e77 00:17:06.975 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:06.975 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5EBB1E317F1845858A5D34D406A53E77 -i 00:17:07.233 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7a01128b-f7b9-46fb-af81-67f4293ed01f 00:17:07.233 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:07.233 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7A01128BF7B946FBAF8167F4293ED01F -i 00:17:07.494 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:07.759 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:08.019 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:08.019 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:08.278 nvme0n1 00:17:08.538 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:08.538 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:08.797 nvme1n2 00:17:08.797 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:08.797 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:08.797 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:08.797 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:08.797 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:09.056 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:09.056 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:09.056 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:09.056 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:09.314 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5ebb1e31-7f18-4585-8a5d-34d406a53e77 == \5\e\b\b\1\e\3\1\-\7\f\1\8\-\4\5\8\5\-\8\a\5\d\-\3\4\d\4\0\6\a\5\3\e\7\7 ]] 00:17:09.314 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:09.314 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:09.314 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7a01128b-f7b9-46fb-af81-67f4293ed01f == \7\a\0\1\1\2\8\b\-\f\7\b\9\-\4\6\f\b\-\a\f\8\1\-\6\7\f\4\2\9\3\e\d\0\1\f ]] 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 92447 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 92447 ']' 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 92447 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92447 00:17:09.881 killing process with pid 92447 00:17:09.881 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:09.882 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:09.882 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92447' 00:17:09.882 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 92447 00:17:09.882 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 92447 00:17:10.448 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.706 rmmod nvme_tcp 00:17:10.706 rmmod nvme_fabrics 00:17:10.706 rmmod nvme_keyring 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 92052 ']' 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 92052 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 92052 ']' 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 92052 00:17:10.706 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92052 00:17:10.965 killing process with pid 92052 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92052' 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 92052 00:17:10.965 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 92052 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:11.224 17:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:11.224 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:17:11.482 00:17:11.482 real 0m20.328s 00:17:11.482 user 0m32.549s 00:17:11.482 sys 0m3.501s 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.482 ************************************ 00:17:11.482 END TEST nvmf_ns_masking 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:11.482 ************************************ 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.482 ************************************ 00:17:11.482 START TEST nvmf_vfio_user 00:17:11.482 ************************************ 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:11.482 * Looking for test storage... 00:17:11.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:11.482 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.742 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:11.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.742 --rc genhtml_branch_coverage=1 00:17:11.742 --rc genhtml_function_coverage=1 00:17:11.742 --rc genhtml_legend=1 00:17:11.742 --rc geninfo_all_blocks=1 00:17:11.743 --rc geninfo_unexecuted_blocks=1 00:17:11.743 00:17:11.743 ' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.743 --rc genhtml_branch_coverage=1 00:17:11.743 --rc genhtml_function_coverage=1 00:17:11.743 --rc genhtml_legend=1 00:17:11.743 --rc geninfo_all_blocks=1 00:17:11.743 --rc geninfo_unexecuted_blocks=1 00:17:11.743 00:17:11.743 ' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.743 --rc genhtml_branch_coverage=1 00:17:11.743 --rc genhtml_function_coverage=1 00:17:11.743 --rc genhtml_legend=1 00:17:11.743 --rc geninfo_all_blocks=1 00:17:11.743 --rc geninfo_unexecuted_blocks=1 00:17:11.743 00:17:11.743 ' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.743 --rc genhtml_branch_coverage=1 00:17:11.743 --rc genhtml_function_coverage=1 00:17:11.743 --rc genhtml_legend=1 00:17:11.743 --rc geninfo_all_blocks=1 00:17:11.743 --rc geninfo_unexecuted_blocks=1 00:17:11.743 00:17:11.743 ' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 
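[editor's note] The scripts/common.sh records above trace a dotted-version comparison (cmp_versions 1.15 '<' 2) used to pick lcov options. A simplified sketch of that idiom, splitting on ".", "-" and ":" exactly as the `IFS=.-:` records show; the real implementation is more elaborate and validates each field with `decimal`:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local op="$2"
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # Missing fields compare as 0, so 2 == 2.0 == 2.0.0.
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: true for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> true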
00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:11.743 17:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=92741 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:11.743 Process pid: 92741 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 92741' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 92741 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 92741 ']' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.743 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:11.743 [2024-11-20 17:37:53.463737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:11.743 [2024-11-20 17:37:53.463880] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.002 [2024-11-20 17:37:53.600750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.002 [2024-11-20 17:37:53.738428] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.002 [2024-11-20 17:37:53.738500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
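[editor's note] For orientation, the per-device bring-up that the following records trace (setup_nvmf_vfio_user iterating `seq 1 2`), condensed into one loop; the RPC names, paths, bdev geometry and NQNs are taken verbatim from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i    # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that with the VFIOUSER transport the listener address (-a) is a directory, not an IP: the target creates a control socket inside it, which the identify run further below opens as /var/run/vfio-user/domain/vfio-user1/1/cntrl.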
00:17:12.002 [2024-11-20 17:37:53.738518] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.002 [2024-11-20 17:37:53.738531] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.002 [2024-11-20 17:37:53.738542] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.002 [2024-11-20 17:37:53.741109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.002 [2024-11-20 17:37:53.741257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.002 [2024-11-20 17:37:53.741863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.002 [2024-11-20 17:37:53.741922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.939 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.939 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:12.939 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:13.873 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:14.132 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:14.391 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:14.391 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:14.391 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:14.391 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:14.649 Malloc1 00:17:14.649 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:14.908 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:15.475 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:15.734 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:15.734 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:15.734 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:15.992 Malloc2 00:17:15.992 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:16.251 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:16.509 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:16.769 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:16.769 [2024-11-20 17:37:58.499101] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:16.769 [2024-11-20 17:37:58.499177] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92888 ] 00:17:17.031 [2024-11-20 17:37:58.642786] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:17.031 [2024-11-20 17:37:58.649454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:17.031 [2024-11-20 17:37:58.649510] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff0206d1000 00:17:17.031 [2024-11-20 17:37:58.650438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.651432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.652431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.653439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.654440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.655479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.656448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:17.031 [2024-11-20 17:37:58.657443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:17:17.031 [2024-11-20 17:37:58.658460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:17.031 [2024-11-20 17:37:58.658487] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff01f945000 00:17:17.031 [2024-11-20 17:37:58.659747] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:17.031 [2024-11-20 17:37:58.680092] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:17.031 [2024-11-20 17:37:58.680154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:17.031 [2024-11-20 17:37:58.682574] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:17.031 [2024-11-20 17:37:58.682649] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:17.031 [2024-11-20 17:37:58.682785] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:17.031 [2024-11-20 17:37:58.682818] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:17.031 [2024-11-20 17:37:58.682825] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:17.031 [2024-11-20 17:37:58.683536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:17.031 [2024-11-20 17:37:58.683561] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:17.031 [2024-11-20 17:37:58.683573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:17.031 [2024-11-20 17:37:58.684539] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:17.031 [2024-11-20 17:37:58.684565] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:17.032 [2024-11-20 17:37:58.684577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.685544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:17.032 [2024-11-20 17:37:58.685566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.686546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:17.032 [2024-11-20 17:37:58.686568] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:17.032 [2024-11-20 
17:37:58.686574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.686584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.686690] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:17.032 [2024-11-20 17:37:58.686697] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.686703] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:17.032 [2024-11-20 17:37:58.687552] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:17.032 [2024-11-20 17:37:58.688559] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:17.032 [2024-11-20 17:37:58.689565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:17.032 [2024-11-20 17:37:58.690555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:17.032 [2024-11-20 17:37:58.690690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:17.032 [2024-11-20 17:37:58.691574] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:17.032 [2024-11-20 17:37:58.691596] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:17.032 [2024-11-20 17:37:58.691604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691626] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:17.032 [2024-11-20 17:37:58.691644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691679] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.032 [2024-11-20 17:37:58.691687] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.032 [2024-11-20 17:37:58.691691] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.032 [2024-11-20 17:37:58.691712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.691789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.691804] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:17.032 [2024-11-20 17:37:58.691810] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:17.032 [2024-11-20 17:37:58.691814] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:17.032 [2024-11-20 17:37:58.691820] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:17.032 [2024-11-20 17:37:58.691826] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:17.032 [2024-11-20 17:37:58.691831] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:17.032 [2024-11-20 17:37:58.691836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691846] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.691878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.691891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.032 [2024-11-20 17:37:58.691901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.032 [2024-11-20 17:37:58.691911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.032 [2024-11-20 17:37:58.691921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.032 [2024-11-20 17:37:58.691926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.691964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.691972] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:17.032 [2024-11-20 17:37:58.691977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691986] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.691997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.692102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692126] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:17.032 [2024-11-20 17:37:58.692131] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:17.032 [2024-11-20 17:37:58.692135] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.032 [2024-11-20 17:37:58.692143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.692163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.692176] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:17.032 [2024-11-20 17:37:58.692190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692208] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.032 [2024-11-20 17:37:58.692213] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.032 [2024-11-20 17:37:58.692217] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.032 [2024-11-20 17:37:58.692224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.032 [2024-11-20 17:37:58.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:17.032 [2024-11-20 17:37:58.692271] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:17.032 [2024-11-20 17:37:58.692281] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:17.033 
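
The register values traced during this bring-up can be sanity-checked by hand; the field positions below are the standard CC/CSTS bit offsets from the NVMe base specification. A quick bash sketch decoding the values seen above:

  cc=0x460001    # written to CC (offset 0x14) to enable the controller
  echo "CC.EN     = $(( cc & 0x1 ))"          # 1: enable requested
  echo "CC.IOSQES = $(( (cc >> 16) & 0xF ))"  # 6: 2^6 = 64-byte SQ entries
  echo "CC.IOCQES = $(( (cc >> 20) & 0xF ))"  # 4: 2^4 = 16-byte CQ entries

  csts=0x1       # read back from CSTS (offset 0x1c) after enable
  echo "CSTS.RDY  = $(( csts & 0x1 ))"        # 1: matches 'CC.EN = 1 && CSTS.RDY = 1'

The decoded queue entry sizes (64-byte SQEs, 16-byte CQEs) line up with the identify output further below, and the same arithmetic applied to the teardown gives CC = 0x464001 (SHN = 1, normal shutdown requested) and CSTS = 0x9 (RDY = 1, SHST = 2, shutdown processing complete).
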
[2024-11-20 17:37:58.692289] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:17.033 [2024-11-20 17:37:58.692294] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.033 [2024-11-20 17:37:58.692298] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.033 [2024-11-20 17:37:58.692305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692374] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:17.033 [2024-11-20 17:37:58.692379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:17.033 [2024-11-20 17:37:58.692385] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:17.033 [2024-11-20 17:37:58.692411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692528] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:17.033 [2024-11-20 17:37:58.692535] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:17.033 [2024-11-20 17:37:58.692539] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:17.033 [2024-11-20 17:37:58.692542] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:17.033 [2024-11-20 17:37:58.692546] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:17.033 [2024-11-20 17:37:58.692554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:17.033 [2024-11-20 17:37:58.692562] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:17.033 [2024-11-20 17:37:58.692567] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:17.033 [2024-11-20 17:37:58.692571] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.033 [2024-11-20 17:37:58.692578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692586] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:17.033 [2024-11-20 17:37:58.692591] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:17.033 [2024-11-20 17:37:58.692595] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.033 [2024-11-20 17:37:58.692602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692611] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:17.033 [2024-11-20 17:37:58.692616] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:17.033 [2024-11-20 17:37:58.692619] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:17.033 [2024-11-20 17:37:58.692626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:17.033 [2024-11-20 17:37:58.692634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:17.033 [2024-11-20 17:37:58.692651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:17.033 ===================================================== 00:17:17.033 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:17.033 ===================================================== 00:17:17.033 Controller 
Capabilities/Features 00:17:17.033 ================================ 00:17:17.033 Vendor ID: 4e58 00:17:17.033 Subsystem Vendor ID: 4e58 00:17:17.033 Serial Number: SPDK1 00:17:17.033 Model Number: SPDK bdev Controller 00:17:17.033 Firmware Version: 24.09.1 00:17:17.033 Recommended Arb Burst: 6 00:17:17.033 IEEE OUI Identifier: 8d 6b 50 00:17:17.033 Multi-path I/O 00:17:17.033 May have multiple subsystem ports: Yes 00:17:17.033 May have multiple controllers: Yes 00:17:17.033 Associated with SR-IOV VF: No 00:17:17.033 Max Data Transfer Size: 131072 00:17:17.033 Max Number of Namespaces: 32 00:17:17.033 Max Number of I/O Queues: 127 00:17:17.033 NVMe Specification Version (VS): 1.3 00:17:17.033 NVMe Specification Version (Identify): 1.3 00:17:17.033 Maximum Queue Entries: 256 00:17:17.033 Contiguous Queues Required: Yes 00:17:17.033 Arbitration Mechanisms Supported 00:17:17.033 Weighted Round Robin: Not Supported 00:17:17.033 Vendor Specific: Not Supported 00:17:17.033 Reset Timeout: 15000 ms 00:17:17.033 Doorbell Stride: 4 bytes 00:17:17.033 NVM Subsystem Reset: Not Supported 00:17:17.033 Command Sets Supported 00:17:17.033 NVM Command Set: Supported 00:17:17.033 Boot Partition: Not Supported 00:17:17.033 Memory Page Size Minimum: 4096 bytes 00:17:17.033 Memory Page Size Maximum: 4096 bytes 00:17:17.033 Persistent Memory Region: Not Supported 00:17:17.033 Optional Asynchronous Events Supported 00:17:17.033 Namespace Attribute Notices: Supported 00:17:17.033 Firmware Activation Notices: Not Supported 00:17:17.033 ANA Change Notices: Not Supported 00:17:17.033 PLE Aggregate Log Change Notices: Not Supported 00:17:17.033 LBA Status Info Alert Notices: Not Supported 00:17:17.033 EGE Aggregate Log Change Notices: Not Supported 00:17:17.033 Normal NVM Subsystem Shutdown event: Not Supported 00:17:17.033 Zone Descriptor Change Notices: Not Supported 00:17:17.033 Discovery Log Change Notices: Not Supported 00:17:17.033 Controller Attributes 00:17:17.033 128-bit Host Identifier: Supported 00:17:17.033 Non-Operational Permissive Mode: Not Supported 00:17:17.033 NVM Sets: Not Supported 00:17:17.033 Read Recovery Levels: Not Supported 00:17:17.033 Endurance Groups: Not Supported 00:17:17.033 Predictable Latency Mode: Not Supported 00:17:17.033 Traffic Based Keep ALive: Not Supported 00:17:17.033 Namespace Granularity: Not Supported 00:17:17.033 SQ Associations: Not Supported 00:17:17.033 UUID List: Not Supported 00:17:17.033 Multi-Domain Subsystem: Not Supported 00:17:17.033 Fixed Capacity Management: Not Supported 00:17:17.033 Variable Capacity Management: Not Supported 00:17:17.033 Delete Endurance Group: Not Supported 00:17:17.033 Delete NVM Set: Not Supported 00:17:17.033 Extended LBA Formats Supported: Not Supported 00:17:17.033 Flexible Data Placement Supported: Not Supported 00:17:17.033 00:17:17.033 Controller Memory Buffer Support 00:17:17.033 ================================ 00:17:17.033 Supported: No 00:17:17.033 00:17:17.033 Persistent Memory Region Support 00:17:17.034 ================================ 00:17:17.034 Supported: No 00:17:17.034 00:17:17.034 Admin Command Set Attributes 00:17:17.034 ============================ 00:17:17.034 Security Send/Receive: Not Supported 00:17:17.034 Format NVM: Not Supported 00:17:17.034 Firmware Activate/Download: Not Supported 00:17:17.034 Namespace Management: Not Supported 00:17:17.034 Device Self-Test: Not Supported 00:17:17.034 Directives: Not Supported 00:17:17.034 NVMe-MI: Not Supported 00:17:17.034 Virtualization Management: Not Supported 
00:17:17.034 Doorbell Buffer Config: Not Supported 00:17:17.034 Get LBA Status Capability: Not Supported 00:17:17.034 Command & Feature Lockdown Capability: Not Supported 00:17:17.034 Abort Command Limit: 4 00:17:17.034 Async Event Request Limit: 4 00:17:17.034 Number of Firmware Slots: N/A 00:17:17.034 Firmware Slot 1 Read-Only: N/A 00:17:17.034 Firmware Activation Without Reset: N/A 00:17:17.034 Multiple Update Detection Support: N/A 00:17:17.034 Firmware Update Granularity: No Information Provided 00:17:17.034 Per-Namespace SMART Log: No 00:17:17.034 Asymmetric Namespace Access Log Page: Not Supported 00:17:17.034 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:17.034 Command Effects Log Page: Supported 00:17:17.034 Get Log Page Extended Data: Supported 00:17:17.034 Telemetry Log Pages: Not Supported 00:17:17.034 Persistent Event Log Pages: Not Supported 00:17:17.034 Supported Log Pages Log Page: May Support 00:17:17.034 Commands Supported & Effects Log Page: Not Supported 00:17:17.034 Feature Identifiers & Effects Log Page:May Support 00:17:17.034 NVMe-MI Commands & Effects Log Page: May Support 00:17:17.034 Data Area 4 for Telemetry Log: Not Supported 00:17:17.034 Error Log Page Entries Supported: 128 00:17:17.034 Keep Alive: Supported 00:17:17.034 Keep Alive Granularity: 10000 ms 00:17:17.034 00:17:17.034 NVM Command Set Attributes 00:17:17.034 ========================== 00:17:17.034 Submission Queue Entry Size 00:17:17.034 Max: 64 00:17:17.034 Min: 64 00:17:17.034 Completion Queue Entry Size 00:17:17.034 Max: 16 00:17:17.034 Min: 16 00:17:17.034 Number of Namespaces: 32 00:17:17.034 Compare Command: Supported 00:17:17.034 Write Uncorrectable Command: Not Supported 00:17:17.034 Dataset Management Command: Supported 00:17:17.034 Write Zeroes Command: Supported 00:17:17.034 Set Features Save Field: Not Supported 00:17:17.034 Reservations: Not Supported 00:17:17.034 Timestamp: Not Supported 00:17:17.034 Copy: Supported 00:17:17.034 Volatile Write Cache: Present 00:17:17.034 Atomic Write Unit (Normal): 1 00:17:17.034 Atomic Write Unit (PFail): 1 00:17:17.034 Atomic Compare & Write Unit: 1 00:17:17.034 Fused Compare & Write: Supported 00:17:17.034 Scatter-Gather List 00:17:17.034 SGL Command Set: Supported (Dword aligned) 00:17:17.034 SGL Keyed: Not Supported 00:17:17.034 SGL Bit Bucket Descriptor: Not Supported 00:17:17.034 SGL Metadata Pointer: Not Supported 00:17:17.034 Oversized SGL: Not Supported 00:17:17.034 SGL Metadata Address: Not Supported 00:17:17.034 SGL Offset: Not Supported 00:17:17.034 Transport SGL Data Block: Not Supported 00:17:17.034 Replay Protected Memory Block: Not Supported 00:17:17.034 00:17:17.034 Firmware Slot Information 00:17:17.034 ========================= 00:17:17.034 Active slot: 1 00:17:17.034 Slot 1 Firmware Revision: 24.09.1 00:17:17.034 00:17:17.034 00:17:17.034 Commands Supported and Effects 00:17:17.034 ============================== 00:17:17.034 Admin Commands 00:17:17.034 -------------- 00:17:17.034 Get Log Page (02h): Supported 00:17:17.034 Identify (06h): Supported 00:17:17.034 Abort (08h): Supported 00:17:17.034 Set Features (09h): Supported 00:17:17.034 Get Features (0Ah): Supported 00:17:17.034 Asynchronous Event Request (0Ch): Supported 00:17:17.034 Keep Alive (18h): Supported 00:17:17.034 I/O Commands 00:17:17.034 ------------ 00:17:17.034 Flush (00h): Supported LBA-Change 00:17:17.034 Write (01h): Supported LBA-Change 00:17:17.034 Read (02h): Supported 00:17:17.034 Compare (05h): Supported 00:17:17.034 Write Zeroes (08h): Supported 
LBA-Change 00:17:17.034 Dataset Management (09h): Supported LBA-Change 00:17:17.034 Copy (19h): Supported LBA-Change 00:17:17.034 00:17:17.034 Error Log 00:17:17.034 ========= 00:17:17.034 00:17:17.034 Arbitration 00:17:17.034 =========== 00:17:17.034 Arbitration Burst: 1 00:17:17.034 00:17:17.034 Power Management 00:17:17.034 ================ 00:17:17.034 Number of Power States: 1 00:17:17.034 Current Power State: Power State #0 00:17:17.034 Power State #0: 00:17:17.034 Max Power: 0.00 W 00:17:17.034 Non-Operational State: Operational 00:17:17.034 Entry Latency: Not Reported 00:17:17.034 Exit Latency: Not Reported 00:17:17.034 Relative Read Throughput: 0 00:17:17.034 Relative Read Latency: 0 00:17:17.034 Relative Write Throughput: 0 00:17:17.034 Relative Write Latency: 0 00:17:17.034 Idle Power: Not Reported 00:17:17.034 Active Power: Not Reported 00:17:17.034 Non-Operational Permissive Mode: Not Supported 00:17:17.034 00:17:17.034 Health Information 00:17:17.034 ================== 00:17:17.034 Critical Warnings: 00:17:17.034 Available Spare Space: OK 00:17:17.034 Temperature: OK 00:17:17.034 Device Reliability: OK 00:17:17.034 Read Only: No 00:17:17.034 Volatile Memory Backup: OK 00:17:17.034 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:17.034 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:17.034 Available Spare: 0% 00:17:17.034 Available Spare Threshold: 0% [2024-11-20 17:37:58.692668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:17.034 [2024-11-20 17:37:58.692839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692884] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:17.034 [2024-11-20 17:37:58.692897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.692919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.034 [2024-11-20 17:37:58.693585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:17.034 [2024-11-20 17:37:58.693627] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:17.034 [2024-11-20 17:37:58.694592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:17.034 [2024-11-20 17:37:58.694696] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:17.034
[2024-11-20 17:37:58.694705] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:17.035 [2024-11-20 17:37:58.695590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:17.035 [2024-11-20 17:37:58.695618] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:17.035 [2024-11-20 17:37:58.695877] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:17.035 [2024-11-20 17:37:58.701080] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:17.035 Life Percentage Used: 0% 00:17:17.035 Data Units Read: 0 00:17:17.035 Data Units Written: 0 00:17:17.035 Host Read Commands: 0 00:17:17.035 Host Write Commands: 0 00:17:17.035 Controller Busy Time: 0 minutes 00:17:17.035 Power Cycles: 0 00:17:17.035 Power On Hours: 0 hours 00:17:17.035 Unsafe Shutdowns: 0 00:17:17.035 Unrecoverable Media Errors: 0 00:17:17.035 Lifetime Error Log Entries: 0 00:17:17.035 Warning Temperature Time: 0 minutes 00:17:17.035 Critical Temperature Time: 0 minutes 00:17:17.035 00:17:17.035 Number of Queues 00:17:17.035 ================ 00:17:17.035 Number of I/O Submission Queues: 127 00:17:17.035 Number of I/O Completion Queues: 127 00:17:17.035 00:17:17.035 Active Namespaces 00:17:17.035 ================= 00:17:17.035 Namespace ID:1 00:17:17.035 Error Recovery Timeout: Unlimited 00:17:17.035 Command Set Identifier: NVM (00h) 00:17:17.035 Deallocate: Supported 00:17:17.035 Deallocated/Unwritten Error: Not Supported 00:17:17.035 Deallocated Read Value: Unknown 00:17:17.035 Deallocate in Write Zeroes: Not Supported 00:17:17.035 Deallocated Guard Field: 0xFFFF 00:17:17.035 Flush: Supported 00:17:17.035 Reservation: Supported 00:17:17.035 Namespace Sharing Capabilities: Multiple Controllers 00:17:17.035 Size (in LBAs): 131072 (0GiB) 00:17:17.035 Capacity (in LBAs): 131072 (0GiB) 00:17:17.035 Utilization (in LBAs): 131072 (0GiB) 00:17:17.035 NGUID: 77C0C095BDAF41499ADE1DE2D5626C91 00:17:17.035 UUID: 77c0c095-bdaf-4149-9ade-1de2d5626c91 00:17:17.035 Thin Provisioning: Not Supported 00:17:17.035 Per-NS Atomic Units: Yes 00:17:17.035 Atomic Boundary Size (Normal): 0 00:17:17.035 Atomic Boundary Size (PFail): 0 00:17:17.035 Atomic Boundary Offset: 0 00:17:17.035 Maximum Single Source Range Length: 65535 00:17:17.035 Maximum Copy Length: 65535 00:17:17.035 Maximum Source Range Count: 1 00:17:17.035 NGUID/EUI64 Never Reused: No 00:17:17.035 Namespace Write Protected: No 00:17:17.035 Number of LBA Formats: 1 00:17:17.035 Current LBA Format: LBA Format #00 00:17:17.035 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:17.035 00:17:17.035 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:17.294 [2024-11-20 17:37:59.062383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:22.594 Initializing NVMe Controllers 00:17:22.594 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:22.594
Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:22.594 Initialization complete. Launching workers. 00:17:22.594 ======================================================== 00:17:22.594 Latency(us) 00:17:22.594 Device Information : IOPS MiB/s Average min max 00:17:22.594 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30211.14 118.01 4236.36 1255.34 10708.48 00:17:22.594 ======================================================== 00:17:22.594 Total : 30211.14 118.01 4236.36 1255.34 10708.48 00:17:22.594 00:17:22.594 [2024-11-20 17:38:04.073808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:22.594 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:22.852 [2024-11-20 17:38:04.442319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:28.121 Initializing NVMe Controllers 00:17:28.121 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:28.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:28.121 Initialization complete. Launching workers. 00:17:28.121 ======================================================== 00:17:28.121 Latency(us) 00:17:28.121 Device Information : IOPS MiB/s Average min max 00:17:28.121 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15983.49 62.44 8013.41 4983.21 16024.53 00:17:28.121 ======================================================== 00:17:28.121 Total : 15983.49 62.44 8013.41 4983.21 16024.53 00:17:28.121 00:17:28.121 [2024-11-20 17:38:09.475343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:28.121 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:28.121 [2024-11-20 17:38:09.746039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:33.385 [2024-11-20 17:38:14.795463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:33.385 Initializing NVMe Controllers 00:17:33.385 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:33.385 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:33.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:33.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:33.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:33.385 Initialization complete. Launching workers. 
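
As a quick plausibility check on the two perf runs above (queue depth 128, 4 KiB I/O), Little's law says throughput should be roughly queue depth divided by mean latency. A throwaway sketch using the averages from the tables:

  # IOPS implied by the reported average latencies (microseconds).
  awk 'BEGIN {
      qd = 128
      printf "read : %.0f IOPS implied vs 30211.14 reported\n", qd / (4236.36 / 1e6)
      printf "write: %.0f IOPS implied vs 15983.49 reported\n", qd / (8013.41 / 1e6)
  }'

Both come out within a fraction of a percent of the reported figures, as expected for a closed-loop load generator.
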
00:17:33.385 Starting thread on core 2 00:17:33.385 Starting thread on core 1 00:17:33.385 Starting thread on core 3 00:17:33.385 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:33.385 [2024-11-20 17:38:15.125893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.663 [2024-11-20 17:38:18.188444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.663 Initializing NVMe Controllers 00:17:36.663 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.663 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.663 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:36.663 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:36.663 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:36.664 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:36.664 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:36.664 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:36.664 Initialization complete. Launching workers. 00:17:36.664 Starting thread on core 1 with urgent priority queue 00:17:36.664 Starting thread on core 2 with urgent priority queue 00:17:36.664 Starting thread on core 3 with urgent priority queue 00:17:36.664 Starting thread on core 0 with urgent priority queue 00:17:36.664 SPDK bdev Controller (SPDK1 ) core 0: 3282.67 IO/s 30.46 secs/100000 ios 00:17:36.664 SPDK bdev Controller (SPDK1 ) core 1: 3257.67 IO/s 30.70 secs/100000 ios 00:17:36.664 SPDK bdev Controller (SPDK1 ) core 2: 3514.33 IO/s 28.45 secs/100000 ios 00:17:36.664 SPDK bdev Controller (SPDK1 ) core 3: 3688.33 IO/s 27.11 secs/100000 ios 00:17:36.664 ======================================================== 00:17:36.664 00:17:36.664 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:36.922 [2024-11-20 17:38:18.537703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.922 Initializing NVMe Controllers 00:17:36.922 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.922 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.922 Namespace ID: 1 size: 0GB 00:17:36.922 Initialization complete. 00:17:36.922 INFO: using host memory buffer for IO 00:17:36.922 Hello world! 
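
The overhead run that follows prints cumulative latency histograms for the submit and complete paths, one 'low - high: percent% ( count)' bucket at a time. A disposable awk sketch for pulling a percentile out of that output; submit_histogram.txt is a hypothetical file holding the bucket lines copied from the log, one per line:

  # Print the first bucket whose cumulative percentage reaches the target percentile.
  awk -v target=99 '$2 == "-" {
      hi = $3;  sub(/:$/, "", hi)     # bucket upper bound, strip trailing colon
      pct = $4; sub(/%$/, "", pct)    # cumulative-percentage column
      if (pct + 0 >= target) { printf "p%d falls in %s-%s us\n", target, $1, hi; exit }
  }' submit_histogram.txt
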
00:17:36.922 [2024-11-20 17:38:18.573552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.922 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:37.180 [2024-11-20 17:38:18.914230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:38.555 Initializing NVMe Controllers 00:17:38.555 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:38.555 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:38.555 Initialization complete. Launching workers. 00:17:38.555 submit (in ns) avg, min, max = 6936.7, 4070.0, 4026864.5 00:17:38.555 complete (in ns) avg, min, max = 27115.4, 2245.5, 4038054.1 00:17:38.555 00:17:38.555 Submit histogram 00:17:38.555 ================ 00:17:38.555 Range in us Cumulative Count 00:17:38.555 4.044 - 4.073: 0.0076% ( 1) 00:17:38.555 4.073 - 4.102: 0.0457% ( 5) 00:17:38.555 4.102 - 4.131: 0.1905% ( 19) 00:17:38.555 4.131 - 4.160: 0.8841% ( 91) 00:17:38.555 4.160 - 4.189: 3.5899% ( 355) 00:17:38.555 4.189 - 4.218: 9.7256% ( 805) 00:17:38.555 4.218 - 4.247: 19.1159% ( 1232) 00:17:38.555 4.247 - 4.276: 29.6799% ( 1386) 00:17:38.555 4.276 - 4.305: 40.5335% ( 1424) 00:17:38.555 4.305 - 4.335: 51.6311% ( 1456) 00:17:38.555 4.335 - 4.364: 60.9832% ( 1227) 00:17:38.555 4.364 - 4.393: 68.6966% ( 1012) 00:17:38.555 4.393 - 4.422: 74.5884% ( 773) 00:17:38.555 4.422 - 4.451: 78.6585% ( 534) 00:17:38.555 4.451 - 4.480: 81.1966% ( 333) 00:17:38.555 4.480 - 4.509: 83.3994% ( 289) 00:17:38.555 4.509 - 4.538: 84.9162% ( 199) 00:17:38.555 4.538 - 4.567: 86.7149% ( 236) 00:17:38.555 4.567 - 4.596: 88.2470% ( 201) 00:17:38.555 4.596 - 4.625: 89.8018% ( 204) 00:17:38.555 4.625 - 4.655: 91.0595% ( 165) 00:17:38.555 4.655 - 4.684: 92.2866% ( 161) 00:17:38.555 4.684 - 4.713: 93.3613% ( 141) 00:17:38.555 4.713 - 4.742: 94.0777% ( 94) 00:17:38.555 4.742 - 4.771: 94.7180% ( 84) 00:17:38.555 4.771 - 4.800: 95.1067% ( 51) 00:17:38.555 4.800 - 4.829: 95.3887% ( 37) 00:17:38.555 4.829 - 4.858: 95.7241% ( 44) 00:17:38.555 4.858 - 4.887: 95.9451% ( 29) 00:17:38.555 4.887 - 4.916: 96.1128% ( 22) 00:17:38.555 4.916 - 4.945: 96.2195% ( 14) 00:17:38.555 4.945 - 4.975: 96.3338% ( 15) 00:17:38.555 4.975 - 5.004: 96.4482% ( 15) 00:17:38.555 5.004 - 5.033: 96.5091% ( 8) 00:17:38.555 5.033 - 5.062: 96.6540% ( 19) 00:17:38.555 5.062 - 5.091: 96.7302% ( 10) 00:17:38.555 5.091 - 5.120: 96.8140% ( 11) 00:17:38.555 5.120 - 5.149: 96.8750% ( 8) 00:17:38.555 5.149 - 5.178: 96.9207% ( 6) 00:17:38.555 5.178 - 5.207: 96.9588% ( 5) 00:17:38.555 5.207 - 5.236: 97.0808% ( 16) 00:17:38.555 5.236 - 5.265: 97.1494% ( 9) 00:17:38.555 5.265 - 5.295: 97.2104% ( 8) 00:17:38.555 5.295 - 5.324: 97.3018% ( 12) 00:17:38.555 5.324 - 5.353: 97.3247% ( 3) 00:17:38.555 5.353 - 5.382: 97.3628% ( 5) 00:17:38.555 5.382 - 5.411: 97.3933% ( 4) 00:17:38.555 5.411 - 5.440: 97.4314% ( 5) 00:17:38.555 5.440 - 5.469: 97.4543% ( 3) 00:17:38.555 5.469 - 5.498: 97.4848% ( 4) 00:17:38.555 5.498 - 5.527: 97.5152% ( 4) 00:17:38.555 5.527 - 5.556: 97.5610% ( 6) 00:17:38.555 5.556 - 5.585: 97.5991% ( 5) 00:17:38.555 5.585 - 5.615: 97.6372% ( 5) 00:17:38.555 5.615 - 5.644: 97.6524% ( 2) 00:17:38.555 5.644 - 5.673: 97.7058% ( 7) 00:17:38.555 5.673 - 5.702: 97.7439% ( 5) 00:17:38.555 5.702 - 5.731: 97.7744% ( 4) 
00:17:38.555 5.731 - 5.760: 97.8125% ( 5) 00:17:38.555 5.760 - 5.789: 97.8354% ( 3) 00:17:38.555 5.789 - 5.818: 97.8659% ( 4) 00:17:38.555 5.818 - 5.847: 97.9116% ( 6) 00:17:38.555 5.847 - 5.876: 97.9573% ( 6) 00:17:38.555 5.876 - 5.905: 97.9878% ( 4) 00:17:38.555 5.905 - 5.935: 98.0259% ( 5) 00:17:38.555 5.935 - 5.964: 98.0488% ( 3) 00:17:38.555 5.964 - 5.993: 98.0716% ( 3) 00:17:38.555 5.993 - 6.022: 98.1098% ( 5) 00:17:38.555 6.022 - 6.051: 98.1174% ( 1) 00:17:38.555 6.051 - 6.080: 98.1326% ( 2) 00:17:38.555 6.080 - 6.109: 98.1555% ( 3) 00:17:38.555 6.109 - 6.138: 98.1784% ( 3) 00:17:38.555 6.138 - 6.167: 98.2088% ( 4) 00:17:38.555 6.167 - 6.196: 98.2470% ( 5) 00:17:38.555 6.196 - 6.225: 98.2622% ( 2) 00:17:38.555 6.225 - 6.255: 98.2698% ( 1) 00:17:38.555 6.255 - 6.284: 98.2851% ( 2) 00:17:38.555 6.284 - 6.313: 98.2927% ( 1) 00:17:38.555 6.313 - 6.342: 98.3003% ( 1) 00:17:38.555 6.342 - 6.371: 98.3079% ( 1) 00:17:38.555 6.371 - 6.400: 98.3460% ( 5) 00:17:38.555 6.400 - 6.429: 98.3613% ( 2) 00:17:38.555 6.429 - 6.458: 98.3689% ( 1) 00:17:38.555 6.575 - 6.604: 98.3841% ( 2) 00:17:38.555 6.604 - 6.633: 98.4146% ( 4) 00:17:38.555 6.633 - 6.662: 98.4375% ( 3) 00:17:38.555 6.662 - 6.691: 98.4451% ( 1) 00:17:38.555 6.691 - 6.720: 98.4527% ( 1) 00:17:38.555 6.720 - 6.749: 98.4604% ( 1) 00:17:38.555 6.749 - 6.778: 98.4756% ( 2) 00:17:38.555 6.778 - 6.807: 98.4985% ( 3) 00:17:38.555 6.807 - 6.836: 98.5061% ( 1) 00:17:38.555 6.865 - 6.895: 98.5213% ( 2) 00:17:38.555 6.895 - 6.924: 98.5518% ( 4) 00:17:38.555 6.924 - 6.953: 98.5595% ( 1) 00:17:38.555 6.982 - 7.011: 98.5671% ( 1) 00:17:38.555 7.011 - 7.040: 98.5747% ( 1) 00:17:38.555 7.069 - 7.098: 98.5899% ( 2) 00:17:38.555 7.156 - 7.185: 98.5976% ( 1) 00:17:38.555 7.185 - 7.215: 98.6128% ( 2) 00:17:38.555 7.273 - 7.302: 98.6280% ( 2) 00:17:38.555 7.331 - 7.360: 98.6357% ( 1) 00:17:38.555 7.360 - 7.389: 98.6433% ( 1) 00:17:38.555 7.389 - 7.418: 98.6585% ( 2) 00:17:38.555 7.447 - 7.505: 98.6662% ( 1) 00:17:38.555 7.505 - 7.564: 98.6890% ( 3) 00:17:38.555 7.564 - 7.622: 98.7043% ( 2) 00:17:38.555 7.622 - 7.680: 98.7271% ( 3) 00:17:38.555 7.680 - 7.738: 98.7652% ( 5) 00:17:38.555 7.738 - 7.796: 98.7729% ( 1) 00:17:38.555 7.796 - 7.855: 98.7805% ( 1) 00:17:38.555 7.855 - 7.913: 98.7881% ( 1) 00:17:38.555 7.913 - 7.971: 98.8186% ( 4) 00:17:38.555 7.971 - 8.029: 98.8415% ( 3) 00:17:38.555 8.029 - 8.087: 98.8491% ( 1) 00:17:38.555 8.087 - 8.145: 98.8567% ( 1) 00:17:38.555 8.145 - 8.204: 98.8720% ( 2) 00:17:38.555 8.204 - 8.262: 98.8872% ( 2) 00:17:38.556 8.320 - 8.378: 98.9024% ( 2) 00:17:38.556 8.495 - 8.553: 98.9253% ( 3) 00:17:38.556 8.553 - 8.611: 98.9405% ( 2) 00:17:38.556 8.611 - 8.669: 98.9634% ( 3) 00:17:38.556 8.669 - 8.727: 98.9863% ( 3) 00:17:38.556 8.727 - 8.785: 99.0015% ( 2) 00:17:38.556 8.785 - 8.844: 99.0168% ( 2) 00:17:38.556 8.844 - 8.902: 99.0396% ( 3) 00:17:38.556 8.960 - 9.018: 99.0473% ( 1) 00:17:38.556 9.018 - 9.076: 99.0549% ( 1) 00:17:38.556 9.076 - 9.135: 99.0625% ( 1) 00:17:38.556 9.193 - 9.251: 99.0854% ( 3) 00:17:38.556 9.251 - 9.309: 99.1082% ( 3) 00:17:38.556 9.309 - 9.367: 99.1159% ( 1) 00:17:38.556 9.484 - 9.542: 99.1235% ( 1) 00:17:38.556 9.542 - 9.600: 99.1311% ( 1) 00:17:38.556 9.600 - 9.658: 99.1540% ( 3) 00:17:38.556 9.775 - 9.833: 99.1616% ( 1) 00:17:38.556 10.007 - 10.065: 99.1692% ( 1) 00:17:38.556 10.065 - 10.124: 99.1768% ( 1) 00:17:38.556 10.124 - 10.182: 99.1921% ( 2) 00:17:38.556 10.182 - 10.240: 99.1997% ( 1) 00:17:38.556 10.240 - 10.298: 99.2073% ( 1) 00:17:38.556 10.415 - 10.473: 99.2226% ( 2) 00:17:38.556 
10.473 - 10.531: 99.2302% ( 1) 00:17:38.556 10.531 - 10.589: 99.2378% ( 1) 00:17:38.556 10.938 - 10.996: 99.2454% ( 1) 00:17:38.556 11.055 - 11.113: 99.2683% ( 3) 00:17:38.556 11.113 - 11.171: 99.2835% ( 2) 00:17:38.556 11.287 - 11.345: 99.2912% ( 1) 00:17:38.556 11.462 - 11.520: 99.3064% ( 2) 00:17:38.556 11.520 - 11.578: 99.3140% ( 1) 00:17:38.556 11.636 - 11.695: 99.3216% ( 1) 00:17:38.556 11.695 - 11.753: 99.3293% ( 1) 00:17:38.556 11.753 - 11.811: 99.3369% ( 1) 00:17:38.556 11.811 - 11.869: 99.3521% ( 2) 00:17:38.556 11.985 - 12.044: 99.3598% ( 1) 00:17:38.556 12.044 - 12.102: 99.3674% ( 1) 00:17:38.556 12.102 - 12.160: 99.3826% ( 2) 00:17:38.556 12.160 - 12.218: 99.3979% ( 2) 00:17:38.556 12.218 - 12.276: 99.4131% ( 2) 00:17:38.556 12.276 - 12.335: 99.4284% ( 2) 00:17:38.556 12.335 - 12.393: 99.4512% ( 3) 00:17:38.556 12.509 - 12.567: 99.4588% ( 1) 00:17:38.556 12.567 - 12.625: 99.4665% ( 1) 00:17:38.556 12.625 - 12.684: 99.4741% ( 1) 00:17:38.556 12.684 - 12.742: 99.4893% ( 2) 00:17:38.556 12.742 - 12.800: 99.5046% ( 2) 00:17:38.556 12.800 - 12.858: 99.5122% ( 1) 00:17:38.556 12.916 - 12.975: 99.5351% ( 3) 00:17:38.556 12.975 - 13.033: 99.5579% ( 3) 00:17:38.556 13.033 - 13.091: 99.5732% ( 2) 00:17:38.556 13.149 - 13.207: 99.5808% ( 1) 00:17:38.556 13.207 - 13.265: 99.5884% ( 1) 00:17:38.556 13.265 - 13.324: 99.5960% ( 1) 00:17:38.556 13.324 - 13.382: 99.6037% ( 1) 00:17:38.556 13.382 - 13.440: 99.6265% ( 3) 00:17:38.556 13.440 - 13.498: 99.6341% ( 1) 00:17:38.556 13.498 - 13.556: 99.6418% ( 1) 00:17:38.556 13.556 - 13.615: 99.6494% ( 1) 00:17:38.556 13.789 - 13.847: 99.6570% ( 1) 00:17:38.556 13.847 - 13.905: 99.6646% ( 1) 00:17:38.556 13.964 - 14.022: 99.6723% ( 1) 00:17:38.556 14.080 - 14.138: 99.6799% ( 1) 00:17:38.556 14.255 - 14.313: 99.6875% ( 1) 00:17:38.556 14.313 - 14.371: 99.6951% ( 1) 00:17:38.556 14.429 - 14.487: 99.7027% ( 1) 00:17:38.556 14.545 - 14.604: 99.7104% ( 1) 00:17:38.556 14.720 - 14.778: 99.7180% ( 1) 00:17:38.556 14.836 - 14.895: 99.7256% ( 1) 00:17:38.556 14.895 - 15.011: 99.7637% ( 5) 00:17:38.556 15.011 - 15.127: 99.7866% ( 3) 00:17:38.556 15.244 - 15.360: 99.7942% ( 1) 00:17:38.556 15.593 - 15.709: 99.8171% ( 3) 00:17:38.556 15.709 - 15.825: 99.8247% ( 1) 00:17:38.556 15.825 - 15.942: 99.8323% ( 1) 00:17:38.556 15.942 - 16.058: 99.8476% ( 2) 00:17:38.556 16.058 - 16.175: 99.8552% ( 1) 00:17:38.556 16.524 - 16.640: 99.8628% ( 1) 00:17:38.556 16.640 - 16.756: 99.8704% ( 1) 00:17:38.556 16.873 - 16.989: 99.8780% ( 1) 00:17:38.556 17.338 - 17.455: 99.8857% ( 1) 00:17:38.556 17.804 - 17.920: 99.8933% ( 1) 00:17:38.556 19.084 - 19.200: 99.9009% ( 1) 00:17:38.556 20.945 - 21.062: 99.9085% ( 1) 00:17:38.556 30.720 - 30.953: 99.9162% ( 1) 00:17:38.556 36.305 - 36.538: 99.9238% ( 1) 00:17:38.556 37.469 - 37.702: 99.9314% ( 1) 00:17:38.556 71.680 - 72.145: 99.9390% ( 1) 00:17:38.556 3991.738 - 4021.527: 99.9924% ( 7) 00:17:38.556 4021.527 - 4051.316: 100.0000% ( 1) 00:17:38.556 00:17:38.556 Complete histogram 00:17:38.556 ================== 00:17:38.556 Range in us Cumulative Count 00:17:38.556 2.240 - 2.255: 0.6326% ( 83) 00:17:38.556 2.255 - 2.269: 43.0259% ( 5562) 00:17:38.556 2.269 - 2.284: 85.7165% ( 5601) 00:17:38.556 2.284 - 2.298: 87.8735% ( 283) 00:17:38.556 2.298 - 2.313: 88.3918% ( 68) 00:17:38.556 2.313 - 2.327: 90.7470% ( 309) 00:17:38.556 2.327 - 2.342: 93.4451% ( 354) 00:17:38.556 2.342 - 2.356: 95.0762% ( 214) 00:17:38.556 2.356 - 2.371: 95.8918% ( 107) 00:17:38.556 2.371 - 2.385: 96.3491% ( 60) 00:17:38.556 2.385 - 2.400: 96.6692% ( 42) 
00:17:38.556 2.400 - 2.415: 96.9512% ( 37) 00:17:38.556 2.415 - 2.429: 97.1723% ( 29) 00:17:38.556 2.429 - 2.444: 97.3857% ( 28) 00:17:38.556 2.444 - 2.458: 97.4543% ( 9) 00:17:38.556 2.458 - 2.473: 97.5229% ( 9) 00:17:38.556 2.473 - 2.487: 97.5915% ( 9) 00:17:38.556 2.487 - 2.502: 97.6067% ( 2) 00:17:38.556 2.502 - 2.516: 97.6220% ( 2) 00:17:38.556 2.516 - 2.531: 97.6524% ( 4) 00:17:38.556 2.531 - 2.545: 97.6982% ( 6) 00:17:38.556 2.545 - 2.560: 97.7363% ( 5) 00:17:38.556 2.560 - 2.575: 97.7744% ( 5) 00:17:38.556 2.575 - 2.589: 97.8125% ( 5) 00:17:38.556 2.589 - 2.604: 97.8430% ( 4) 00:17:38.556 2.604 - 2.618: 97.8735% ( 4) 00:17:38.556 2.618 - 2.633: 97.9192% ( 6) 00:17:38.556 2.633 - 2.647: 97.9345% ( 2) 00:17:38.556 2.647 - 2.662: 97.9878% ( 7) 00:17:38.556 2.662 - 2.676: 98.0335% ( 6) 00:17:38.556 2.676 - 2.691: 98.0564% ( 3) 00:17:38.556 2.691 - 2.705: 98.1098% ( 7) 00:17:38.556 2.705 - 2.720: 98.1479% ( 5) 00:17:38.556 2.720 - 2.735: 98.1631% ( 2) 00:17:38.556 2.735 - 2.749: 98.1784% ( 2) 00:17:38.556 2.749 - 2.764: 98.1860% ( 1) 00:17:38.556 2.778 - 2.793: 98.2165% ( 4) 00:17:38.556 2.793 - 2.807: 98.2241% ( 1) 00:17:38.556 2.807 - 2.822: 98.2470% ( 3) 00:17:38.556 2.851 - 2.865: 98.2546% ( 1) 00:17:38.556 2.865 - 2.880: 98.2698% ( 2) 00:17:38.556 2.880 - 2.895: 98.2927% ( 3) 00:17:38.556 2.895 - 2.909: 98.3155% ( 3) 00:17:38.556 2.909 - 2.924: 98.3460% ( 4) 00:17:38.556 2.924 - 2.938: 98.3537% ( 1) 00:17:38.556 2.953 - 2.967: 98.3765% ( 3) 00:17:38.556 2.967 - 2.982: 98.3994% ( 3) 00:17:38.556 2.982 - 2.996: 98.4146% ( 2) 00:17:38.556 2.996 - 3.011: 98.4375% ( 3) 00:17:38.556 3.011 - 3.025: 98.4680% ( 4) 00:17:38.556 3.025 - 3.040: 98.4832% ( 2) 00:17:38.556 3.055 - 3.069: 98.4909% ( 1) 00:17:38.556 3.069 - 3.084: 98.5213% ( 4) 00:17:38.556 3.084 - 3.098: 98.5442% ( 3) 00:17:38.556 3.098 - 3.113: 98.5595% ( 2) 00:17:38.556 3.113 - 3.127: 98.5747% ( 2) 00:17:38.556 3.127 - 3.142: 98.5823% ( 1) 00:17:38.556 3.142 - 3.156: 98.5899% ( 1) 00:17:38.556 3.156 - 3.171: 98.5976% ( 1) 00:17:38.556 3.185 - 3.200: 98.6052% ( 1) 00:17:38.556 3.215 - 3.229: 98.6280% ( 3) 00:17:38.556 3.229 - 3.244: 98.6357% ( 1) 00:17:38.556 3.258 - 3.273: 98.6433% ( 1) 00:17:38.556 3.273 - 3.287: 98.6585% ( 2) 00:17:38.556 3.287 - 3.302: 98.6662% ( 1) 00:17:38.556 3.302 - 3.316: 98.6738% ( 1) 00:17:38.556 3.331 - 3.345: 98.6814% ( 1) 00:17:38.556 3.360 - 3.375: 98.6890% ( 1) 00:17:38.556 3.404 - 3.418: 98.6966% ( 1) 00:17:38.556 3.447 - 3.462: 98.7043% ( 1) 00:17:38.556 3.520 - 3.535: 98.7119% ( 1) 00:17:38.556 3.549 - 3.564: 98.7195% ( 1) 00:17:38.556 3.622 - 3.636: 98.7271% ( 1) 00:17:38.556 3.636 - 3.651: 98.7348% ( 1) 00:17:38.556 3.680 - 3.695: 98.7500% ( 2) 00:17:38.556 3.695 - 3.709: 98.7576% ( 1) 00:17:38.556 3.782 - 3.811: 98.7652% ( 1) 00:17:38.556 3.811 - 3.840: 98.7729% ( 1) 00:17:38.556 3.898 - 3.927: 98.7805% ( 1) 00:17:38.556 3.985 - 4.015: 98.7881% ( 1) 00:17:38.556 4.044 - 4.073: 98.7957% ( 1) 00:17:38.556 4.073 - 4.102: 98.8034% ( 1) 00:17:38.556 4.189 - 4.218: 98.8110% ( 1) 00:17:38.556 4.305 - 4.335: 98.8186% ( 1) 00:17:38.556 4.422 - 4.451: 98.8262% ( 1) 00:17:38.556 4.480 - 4.509: 98.8415% ( 2) 00:17:38.556 4.538 - 4.567: 98.8491% ( 1) 00:17:38.556 4.567 - 4.596: 98.8567% ( 1) 00:17:38.556 4.858 - 4.887: 98.8643% ( 1) 00:17:38.556 4.945 - 4.975: 98.8796% ( 2) 00:17:38.556 5.062 - 5.091: 98.8872% ( 1) 00:17:38.556 5.207 - 5.236: 98.8948% ( 1) 00:17:38.556 5.440 - 5.469: 98.9024% ( 1) 00:17:38.557 5.469 - 5.498: 98.9177% ( 2) 00:17:38.557 5.760 - 5.789: 98.9253% ( 1) 00:17:38.557 5.993 - 
6.022: 98.9482% ( 3) 00:17:38.557 6.022 - 6.051: 98.9558% ( 1) 00:17:38.557 6.138 - 6.167: 98.9710% ( 2) 00:17:38.557 6.284 - 6.313: 98.9787% ( 1) 00:17:38.557 6.342 - 6.371: 98.9863% ( 1) 00:17:38.557 6.371 - 6.400: 98.9939% ( 1) 00:17:38.557 6.720 - 6.749: 99.0015% ( 1) 00:17:38.557 6.836 - 6.865: 99.0091% ( 1) 00:17:38.557 7.069 - 7.098: 99.0168% ( 1) 00:17:38.557 7.360 - 7.389: 99.0244% ( 1) 00:17:38.557 7.447 - 7.505: 99.0396% ( 2) 00:17:38.557 7.738 - 7.796: 99.0473% ( 1) 00:17:38.557 7.855 - 7.913: 99.0549% ( 1) 00:17:38.557 8.262 - 8.320: 99.0625% ( 1) 00:17:38.557 8.495 - 8.553: 99.0701% ( 1) 00:17:38.557 8.553 - 8.611: 99.0854% ( 2) 00:17:38.557 8.611 - 8.669: 99.0930% ( 1) 00:17:38.557 8.669 - 8.727: 99.1082% ( 2) 00:17:38.557 9.251 - 9.309: 99.1159% ( 1) 00:17:38.557 9.891 - 9.949: 99.1235% ( 1) 00:17:38.557 10.007 - 10.065: 99.1311% ( 1) 00:17:38.557 10.065 - 10.124: 99.1387% ( 1) 00:17:38.557 10.298 - 10.356: 99.1463% ( 1) 00:17:38.557 10.356 - 10.415: 99.1540% ( 1) 00:17:38.557 10.647 - 10.705: 99.1616% ( 1) 00:17:38.557 10.822 - 10.880: 99.1692% ( 1) 00:17:38.557 10.996 - 11.055: 99.1768% ( 1) 00:17:38.557 11.055 - 11.113: 99.1845% ( 1) 00:17:38.557 11.171 - 11.229: 99.1997% ( 2) 00:17:38.557 11.404 - 11.462: 99.2149% ( 2) 00:17:38.557 11.578 - 11.636: 99.2226% ( 1) 00:17:38.557 11.811 - 11.869: 99.2378% ( 2) 00:17:38.557 11.869 - 11.927: 99.2454% ( 1) 00:17:38.557 11.985 - 12.044: 99.2607% ( 2) 00:17:38.557 12.567 - 12.625: 99.2683% ( 1) [2024-11-20 17:38:19.934617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:38.557 12.742 - 12.800: 99.2759% ( 1) 00:17:38.557 12.800 - 12.858: 99.2835% ( 1) 00:17:38.557 13.091 - 13.149: 99.2912% ( 1) 00:17:38.557 13.149 - 13.207: 99.2988% ( 1) 00:17:38.557 13.382 - 13.440: 99.3064% ( 1) 00:17:38.557 13.905 - 13.964: 99.3140% ( 1) 00:17:38.557 14.022 - 14.080: 99.3216% ( 1) 00:17:38.557 14.080 - 14.138: 99.3293% ( 1) 00:17:38.557 14.313 - 14.371: 99.3369% ( 1) 00:17:38.557 14.429 - 14.487: 99.3445% ( 1) 00:17:38.557 16.407 - 16.524: 99.3521% ( 1) 00:17:38.557 16.989 - 17.105: 99.3598% ( 1) 00:17:38.557 20.713 - 20.829: 99.3674% ( 1) 00:17:38.557 22.575 - 22.691: 99.3750% ( 1) 00:17:38.557 36.538 - 36.771: 99.3826% ( 1) 00:17:38.557 3991.738 - 4021.527: 99.9314% ( 72) 00:17:38.557 4021.527 - 4051.316: 100.0000% ( 9) 00:17:38.557 00:17:38.557 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:38.557 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:38.557 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:38.557 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:38.557 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
"IPv4", 00:17:38.557 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:38.557 "trsvcid": "0", 00:17:38.557 "trtype": "VFIOUSER" 00:17:38.557 } 00:17:38.557 ], 00:17:38.557 "max_cntlid": 65519, 00:17:38.557 "max_namespaces": 32, 00:17:38.557 "min_cntlid": 1, 00:17:38.557 "model_number": "SPDK bdev Controller", 00:17:38.557 "namespaces": [ 00:17:38.557 { 00:17:38.557 "bdev_name": "Malloc1", 00:17:38.557 "name": "Malloc1", 00:17:38.557 "nguid": "77C0C095BDAF41499ADE1DE2D5626C91", 00:17:38.557 "nsid": 1, 00:17:38.557 "uuid": "77c0c095-bdaf-4149-9ade-1de2d5626c91" 00:17:38.557 } 00:17:38.557 ], 00:17:38.557 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:38.557 "serial_number": "SPDK1", 00:17:38.557 "subtype": "NVMe" 00:17:38.557 }, 00:17:38.557 { 00:17:38.557 "allow_any_host": true, 00:17:38.557 "hosts": [], 00:17:38.557 "listen_addresses": [ 00:17:38.557 { 00:17:38.557 "adrfam": "IPv4", 00:17:38.557 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:38.557 "trsvcid": "0", 00:17:38.557 "trtype": "VFIOUSER" 00:17:38.557 } 00:17:38.557 ], 00:17:38.557 "max_cntlid": 65519, 00:17:38.557 "max_namespaces": 32, 00:17:38.557 "min_cntlid": 1, 00:17:38.557 "model_number": "SPDK bdev Controller", 00:17:38.557 "namespaces": [ 00:17:38.557 { 00:17:38.557 "bdev_name": "Malloc2", 00:17:38.557 "name": "Malloc2", 00:17:38.557 "nguid": "C5D629493B944BF088EC0F2A986FDAFA", 00:17:38.557 "nsid": 1, 00:17:38.557 "uuid": "c5d62949-3b94-4bf0-88ec-0f2a986fdafa" 00:17:38.557 } 00:17:38.557 ], 00:17:38.557 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:38.557 "serial_number": "SPDK2", 00:17:38.557 "subtype": "NVMe" 00:17:38.557 } 00:17:38.557 ] 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=93149 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:38.816 [2024-11-20 17:38:20.560315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:38.816 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:39.383 Malloc3 00:17:39.383 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:39.642 [2024-11-20 17:38:21.226386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:39.642 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:39.642 Asynchronous Event Request test 00:17:39.642 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:39.642 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:39.642 Registering asynchronous event callbacks... 00:17:39.642 Starting namespace attribute notice tests for all controllers... 00:17:39.642 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:39.642 aer_cb - Changed Namespace 00:17:39.642 Cleaning up... 
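The namespace-attribute AER exercise above reduces to three rpc.py calls; a minimal standalone sketch using the same paths, bdev name, and NQN as this run (the subsystem listing that follows is the output of the final call, showing Malloc3 attached as nsid 2):

  # create a 64 MiB malloc bdev with a 512-byte block size (the sizes passed above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  # attach it as namespace 2 of cnode1; this fires the namespace-attribute AEN the aer tool is waiting for
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # re-read the subsystem list to confirm the new namespace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems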
00:17:39.901 [ 00:17:39.901 { 00:17:39.901 "allow_any_host": true, 00:17:39.901 "hosts": [], 00:17:39.901 "listen_addresses": [], 00:17:39.901 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:39.901 "subtype": "Discovery" 00:17:39.901 }, 00:17:39.901 { 00:17:39.901 "allow_any_host": true, 00:17:39.901 "hosts": [], 00:17:39.901 "listen_addresses": [ 00:17:39.901 { 00:17:39.901 "adrfam": "IPv4", 00:17:39.901 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:39.901 "trsvcid": "0", 00:17:39.901 "trtype": "VFIOUSER" 00:17:39.901 } 00:17:39.901 ], 00:17:39.901 "max_cntlid": 65519, 00:17:39.901 "max_namespaces": 32, 00:17:39.901 "min_cntlid": 1, 00:17:39.901 "model_number": "SPDK bdev Controller", 00:17:39.901 "namespaces": [ 00:17:39.901 { 00:17:39.901 "bdev_name": "Malloc1", 00:17:39.901 "name": "Malloc1", 00:17:39.901 "nguid": "77C0C095BDAF41499ADE1DE2D5626C91", 00:17:39.901 "nsid": 1, 00:17:39.901 "uuid": "77c0c095-bdaf-4149-9ade-1de2d5626c91" 00:17:39.901 }, 00:17:39.901 { 00:17:39.901 "bdev_name": "Malloc3", 00:17:39.901 "name": "Malloc3", 00:17:39.901 "nguid": "05BB187D1A884466A1F88684DEFF5A53", 00:17:39.901 "nsid": 2, 00:17:39.901 "uuid": "05bb187d-1a88-4466-a1f8-8684deff5a53" 00:17:39.901 } 00:17:39.901 ], 00:17:39.901 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:39.901 "serial_number": "SPDK1", 00:17:39.901 "subtype": "NVMe" 00:17:39.901 }, 00:17:39.901 { 00:17:39.901 "allow_any_host": true, 00:17:39.901 "hosts": [], 00:17:39.901 "listen_addresses": [ 00:17:39.901 { 00:17:39.901 "adrfam": "IPv4", 00:17:39.901 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:39.901 "trsvcid": "0", 00:17:39.901 "trtype": "VFIOUSER" 00:17:39.901 } 00:17:39.901 ], 00:17:39.901 "max_cntlid": 65519, 00:17:39.901 "max_namespaces": 32, 00:17:39.901 "min_cntlid": 1, 00:17:39.901 "model_number": "SPDK bdev Controller", 00:17:39.901 "namespaces": [ 00:17:39.901 { 00:17:39.901 "bdev_name": "Malloc2", 00:17:39.901 "name": "Malloc2", 00:17:39.901 "nguid": "C5D629493B944BF088EC0F2A986FDAFA", 00:17:39.901 "nsid": 1, 00:17:39.901 "uuid": "c5d62949-3b94-4bf0-88ec-0f2a986fdafa" 00:17:39.901 } 00:17:39.901 ], 00:17:39.901 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:39.901 "serial_number": "SPDK2", 00:17:39.901 "subtype": "NVMe" 00:17:39.901 } 00:17:39.901 ] 00:17:39.901 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 93149 00:17:39.901 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:39.901 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:39.901 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:39.901 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:39.901 [2024-11-20 17:38:21.609336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:39.901 [2024-11-20 17:38:21.609426] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93187 ] 00:17:40.161 [2024-11-20 17:38:21.751854] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:40.161 [2024-11-20 17:38:21.760714] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:40.161 [2024-11-20 17:38:21.760801] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa20405c000 00:17:40.161 [2024-11-20 17:38:21.761700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.161 [2024-11-20 17:38:21.762701] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.161 [2024-11-20 17:38:21.763707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.161 [2024-11-20 17:38:21.764724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.161 [2024-11-20 17:38:21.766077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.161 [2024-11-20 17:38:21.766726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.162 [2024-11-20 17:38:21.767733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.162 [2024-11-20 17:38:21.768737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.162 [2024-11-20 17:38:21.769753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:40.162 [2024-11-20 17:38:21.769792] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa203493000 00:17:40.162 [2024-11-20 17:38:21.771083] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:40.162 [2024-11-20 17:38:21.791471] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:40.162 [2024-11-20 17:38:21.791566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:40.162 [2024-11-20 17:38:21.793710] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:40.162 [2024-11-20 17:38:21.793794] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:40.162 [2024-11-20 17:38:21.793926] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:40.162 
[2024-11-20 17:38:21.793970] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:40.162 [2024-11-20 17:38:21.793977] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:40.162 [2024-11-20 17:38:21.794682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:40.162 [2024-11-20 17:38:21.794709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:40.162 [2024-11-20 17:38:21.794721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:40.162 [2024-11-20 17:38:21.795687] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:40.162 [2024-11-20 17:38:21.795715] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:40.162 [2024-11-20 17:38:21.795728] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.796682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:40.162 [2024-11-20 17:38:21.796707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.797685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:40.162 [2024-11-20 17:38:21.797709] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:40.162 [2024-11-20 17:38:21.797717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.797727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.797833] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:40.162 [2024-11-20 17:38:21.797839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.797845] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:40.162 [2024-11-20 17:38:21.798693] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:40.162 [2024-11-20 17:38:21.799699] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:40.162 [2024-11-20 17:38:21.800699] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:40.162 [2024-11-20 17:38:21.801697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:40.162 [2024-11-20 17:38:21.801815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:40.162 [2024-11-20 17:38:21.802705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:40.162 [2024-11-20 17:38:21.802730] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:40.162 [2024-11-20 17:38:21.802738] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.802773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:40.162 [2024-11-20 17:38:21.802795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.802821] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.162 [2024-11-20 17:38:21.802827] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.162 [2024-11-20 17:38:21.802832] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.162 [2024-11-20 17:38:21.802853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.162 [2024-11-20 17:38:21.809097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:40.162 [2024-11-20 17:38:21.809126] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:40.162 [2024-11-20 17:38:21.809133] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:40.162 [2024-11-20 17:38:21.809138] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:40.162 [2024-11-20 17:38:21.809143] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:40.162 [2024-11-20 17:38:21.809149] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:40.162 [2024-11-20 17:38:21.809154] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:40.162 [2024-11-20 17:38:21.809160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.809171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.809187] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:40.162 [2024-11-20 17:38:21.817084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:40.162 [2024-11-20 17:38:21.817127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.162 [2024-11-20 17:38:21.817139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.162 [2024-11-20 17:38:21.817149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.162 [2024-11-20 17:38:21.817159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.162 [2024-11-20 17:38:21.817166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.817194] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.817207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:40.162 [2024-11-20 17:38:21.825079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:40.162 [2024-11-20 17:38:21.825109] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:40.162 [2024-11-20 17:38:21.825118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.825131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.825150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.825167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:40.162 [2024-11-20 17:38:21.833080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:40.162 [2024-11-20 17:38:21.833185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.833202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.833215] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:40.162 [2024-11-20 17:38:21.833222] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:40.162 [2024-11-20 17:38:21.833226] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.162 [2024-11-20 17:38:21.833235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:40.162 [2024-11-20 17:38:21.841077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:40.162 [2024-11-20 17:38:21.841114] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:40.162 [2024-11-20 17:38:21.841133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.841147] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:40.162 [2024-11-20 17:38:21.841157] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.162 [2024-11-20 17:38:21.841163] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.162 [2024-11-20 17:38:21.841166] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.163 [2024-11-20 17:38:21.841175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.849080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.849151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.849166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.849177] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.163 [2024-11-20 17:38:21.849182] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.163 [2024-11-20 17:38:21.849186] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.163 [2024-11-20 17:38:21.849195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.857083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.857112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857144] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857163] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:40.163 [2024-11-20 17:38:21.857168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:40.163 [2024-11-20 17:38:21.857174] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:40.163 [2024-11-20 17:38:21.857218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.868083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.868118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.876081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.876113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.884083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.884114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.892082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.892128] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:40.163 [2024-11-20 17:38:21.892136] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:40.163 [2024-11-20 17:38:21.892140] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:40.163 [2024-11-20 17:38:21.892144] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:40.163 [2024-11-20 17:38:21.892148] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:40.163 [2024-11-20 17:38:21.892157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:40.163 [2024-11-20 17:38:21.892177] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:40.163 [2024-11-20 17:38:21.892182] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fc000 00:17:40.163 [2024-11-20 17:38:21.892186] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.163 [2024-11-20 17:38:21.892193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.892201] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:40.163 [2024-11-20 17:38:21.892206] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.163 [2024-11-20 17:38:21.892210] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.163 [2024-11-20 17:38:21.892217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.892226] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:40.163 [2024-11-20 17:38:21.892231] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:40.163 [2024-11-20 17:38:21.892235] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.163 [2024-11-20 17:38:21.892241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:40.163 [2024-11-20 17:38:21.900082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.900118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.900141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:40.163 [2024-11-20 17:38:21.900150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:40.163 ===================================================== 00:17:40.163 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:40.163 ===================================================== 00:17:40.163 Controller Capabilities/Features 00:17:40.163 ================================ 00:17:40.163 Vendor ID: 4e58 00:17:40.163 Subsystem Vendor ID: 4e58 00:17:40.163 Serial Number: SPDK2 00:17:40.163 Model Number: SPDK bdev Controller 00:17:40.163 Firmware Version: 24.09.1 00:17:40.163 Recommended Arb Burst: 6 00:17:40.163 IEEE OUI Identifier: 8d 6b 50 00:17:40.163 Multi-path I/O 00:17:40.163 May have multiple subsystem ports: Yes 00:17:40.163 May have multiple controllers: Yes 00:17:40.163 Associated with SR-IOV VF: No 00:17:40.163 Max Data Transfer Size: 131072 00:17:40.163 Max Number of Namespaces: 32 00:17:40.163 Max Number of I/O Queues: 127 00:17:40.163 NVMe Specification Version (VS): 1.3 00:17:40.163 NVMe Specification Version (Identify): 1.3 00:17:40.163 Maximum Queue Entries: 256 00:17:40.163 Contiguous Queues Required: Yes 00:17:40.163 Arbitration Mechanisms Supported 00:17:40.163 Weighted Round Robin: Not Supported 00:17:40.163 Vendor Specific: Not Supported 00:17:40.163 Reset Timeout: 15000 ms 00:17:40.163 Doorbell Stride: 4 bytes 
00:17:40.163 NVM Subsystem Reset: Not Supported 00:17:40.163 Command Sets Supported 00:17:40.163 NVM Command Set: Supported 00:17:40.163 Boot Partition: Not Supported 00:17:40.163 Memory Page Size Minimum: 4096 bytes 00:17:40.163 Memory Page Size Maximum: 4096 bytes 00:17:40.163 Persistent Memory Region: Not Supported 00:17:40.163 Optional Asynchronous Events Supported 00:17:40.163 Namespace Attribute Notices: Supported 00:17:40.163 Firmware Activation Notices: Not Supported 00:17:40.163 ANA Change Notices: Not Supported 00:17:40.163 PLE Aggregate Log Change Notices: Not Supported 00:17:40.163 LBA Status Info Alert Notices: Not Supported 00:17:40.163 EGE Aggregate Log Change Notices: Not Supported 00:17:40.163 Normal NVM Subsystem Shutdown event: Not Supported 00:17:40.163 Zone Descriptor Change Notices: Not Supported 00:17:40.163 Discovery Log Change Notices: Not Supported 00:17:40.163 Controller Attributes 00:17:40.163 128-bit Host Identifier: Supported 00:17:40.163 Non-Operational Permissive Mode: Not Supported 00:17:40.163 NVM Sets: Not Supported 00:17:40.163 Read Recovery Levels: Not Supported 00:17:40.163 Endurance Groups: Not Supported 00:17:40.163 Predictable Latency Mode: Not Supported 00:17:40.163 Traffic Based Keep ALive: Not Supported 00:17:40.163 Namespace Granularity: Not Supported 00:17:40.163 SQ Associations: Not Supported 00:17:40.163 UUID List: Not Supported 00:17:40.163 Multi-Domain Subsystem: Not Supported 00:17:40.163 Fixed Capacity Management: Not Supported 00:17:40.163 Variable Capacity Management: Not Supported 00:17:40.163 Delete Endurance Group: Not Supported 00:17:40.163 Delete NVM Set: Not Supported 00:17:40.163 Extended LBA Formats Supported: Not Supported 00:17:40.163 Flexible Data Placement Supported: Not Supported 00:17:40.163 00:17:40.163 Controller Memory Buffer Support 00:17:40.163 ================================ 00:17:40.163 Supported: No 00:17:40.163 00:17:40.163 Persistent Memory Region Support 00:17:40.163 ================================ 00:17:40.163 Supported: No 00:17:40.163 00:17:40.163 Admin Command Set Attributes 00:17:40.163 ============================ 00:17:40.163 Security Send/Receive: Not Supported 00:17:40.163 Format NVM: Not Supported 00:17:40.163 Firmware Activate/Download: Not Supported 00:17:40.163 Namespace Management: Not Supported 00:17:40.163 Device Self-Test: Not Supported 00:17:40.163 Directives: Not Supported 00:17:40.164 NVMe-MI: Not Supported 00:17:40.164 Virtualization Management: Not Supported 00:17:40.164 Doorbell Buffer Config: Not Supported 00:17:40.164 Get LBA Status Capability: Not Supported 00:17:40.164 Command & Feature Lockdown Capability: Not Supported 00:17:40.164 Abort Command Limit: 4 00:17:40.164 Async Event Request Limit: 4 00:17:40.164 Number of Firmware Slots: N/A 00:17:40.164 Firmware Slot 1 Read-Only: N/A 00:17:40.164 Firmware Activation Without Reset: N/A 00:17:40.164 Multiple Update Detection Support: N/A 00:17:40.164 Firmware Update Granularity: No Information Provided 00:17:40.164 Per-Namespace SMART Log: No 00:17:40.164 Asymmetric Namespace Access Log Page: Not Supported 00:17:40.164 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:40.164 Command Effects Log Page: Supported 00:17:40.164 Get Log Page Extended Data: Supported 00:17:40.164 Telemetry Log Pages: Not Supported 00:17:40.164 Persistent Event Log Pages: Not Supported 00:17:40.164 Supported Log Pages Log Page: May Support 00:17:40.164 Commands Supported & Effects Log Page: Not Supported 00:17:40.164 Feature Identifiers & Effects Log Page:May 
Support 00:17:40.164 NVMe-MI Commands & Effects Log Page: May Support 00:17:40.164 Data Area 4 for Telemetry Log: Not Supported 00:17:40.164 Error Log Page Entries Supported: 128 00:17:40.164 Keep Alive: Supported 00:17:40.164 Keep Alive Granularity: 10000 ms 00:17:40.164 00:17:40.164 NVM Command Set Attributes 00:17:40.164 ========================== 00:17:40.164 Submission Queue Entry Size 00:17:40.164 Max: 64 00:17:40.164 Min: 64 00:17:40.164 Completion Queue Entry Size 00:17:40.164 Max: 16 00:17:40.164 Min: 16 00:17:40.164 Number of Namespaces: 32 00:17:40.164 Compare Command: Supported 00:17:40.164 Write Uncorrectable Command: Not Supported 00:17:40.164 Dataset Management Command: Supported 00:17:40.164 Write Zeroes Command: Supported 00:17:40.164 Set Features Save Field: Not Supported 00:17:40.164 Reservations: Not Supported 00:17:40.164 Timestamp: Not Supported 00:17:40.164 Copy: Supported 00:17:40.164 Volatile Write Cache: Present 00:17:40.164 Atomic Write Unit (Normal): 1 00:17:40.164 Atomic Write Unit (PFail): 1 00:17:40.164 Atomic Compare & Write Unit: 1 00:17:40.164 Fused Compare & Write: Supported 00:17:40.164 Scatter-Gather List 00:17:40.164 SGL Command Set: Supported (Dword aligned) 00:17:40.164 SGL Keyed: Not Supported 00:17:40.164 SGL Bit Bucket Descriptor: Not Supported 00:17:40.164 SGL Metadata Pointer: Not Supported 00:17:40.164 Oversized SGL: Not Supported 00:17:40.164 SGL Metadata Address: Not Supported 00:17:40.164 SGL Offset: Not Supported 00:17:40.164 Transport SGL Data Block: Not Supported 00:17:40.164 Replay Protected Memory Block: Not Supported 00:17:40.164 00:17:40.164 Firmware Slot Information 00:17:40.164 ========================= 00:17:40.164 Active slot: 1 00:17:40.164 Slot 1 Firmware Revision: 24.09.1 00:17:40.164 00:17:40.164 00:17:40.164 Commands Supported and Effects 00:17:40.164 ============================== 00:17:40.164 Admin Commands 00:17:40.164 -------------- 00:17:40.164 Get Log Page (02h): Supported 00:17:40.164 Identify (06h): Supported 00:17:40.164 Abort (08h): Supported 00:17:40.164 Set Features (09h): Supported 00:17:40.164 Get Features (0Ah): Supported 00:17:40.164 Asynchronous Event Request (0Ch): Supported 00:17:40.164 Keep Alive (18h): Supported 00:17:40.164 I/O Commands 00:17:40.164 ------------ 00:17:40.164 Flush (00h): Supported LBA-Change 00:17:40.164 Write (01h): Supported LBA-Change 00:17:40.164 Read (02h): Supported 00:17:40.164 Compare (05h): Supported 00:17:40.164 Write Zeroes (08h): Supported LBA-Change 00:17:40.164 Dataset Management (09h): Supported LBA-Change 00:17:40.164 Copy (19h): Supported LBA-Change 00:17:40.164 00:17:40.164 Error Log 00:17:40.164 ========= 00:17:40.164 00:17:40.164 Arbitration 00:17:40.164 =========== 00:17:40.164 Arbitration Burst: 1 00:17:40.164 00:17:40.164 Power Management 00:17:40.164 ================ 00:17:40.164 Number of Power States: 1 00:17:40.164 Current Power State: Power State #0 00:17:40.164 Power State #0: 00:17:40.164 Max Power: 0.00 W 00:17:40.164 Non-Operational State: Operational 00:17:40.164 Entry Latency: Not Reported 00:17:40.164 Exit Latency: Not Reported 00:17:40.164 Relative Read Throughput: 0 00:17:40.164 Relative Read Latency: 0 00:17:40.164 Relative Write Throughput: 0 00:17:40.164 Relative Write Latency: 0 00:17:40.164 Idle Power: Not Reported 00:17:40.164 Active Power: Not Reported 00:17:40.164 Non-Operational Permissive Mode: Not Supported 00:17:40.164 00:17:40.164 Health Information 00:17:40.164 ================== 00:17:40.164 Critical Warnings: 00:17:40.164 Available 
Spare Space: OK 00:17:40.164 Temperature: OK 00:17:40.164 Device Reliability: OK 00:17:40.164 Read Only: No 00:17:40.164 Volatile Memory Backup: OK 00:17:40.164 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:40.164 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:40.164 Available Spare: 0%
[2024-11-20 17:38:21.900284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:40.164 [2024-11-20 17:38:21.908083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:40.164 [2024-11-20 17:38:21.908154] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:40.164 [2024-11-20 17:38:21.908169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.164 [2024-11-20 17:38:21.908177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.164 [2024-11-20 17:38:21.908185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.164 [2024-11-20 17:38:21.908193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.164 [2024-11-20 17:38:21.908302] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:40.164 [2024-11-20 17:38:21.908323] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:40.164 [2024-11-20 17:38:21.909303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:40.164 [2024-11-20 17:38:21.909418] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:40.164 [2024-11-20 17:38:21.909430] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:40.164 [2024-11-20 17:38:21.910300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:40.164 [2024-11-20 17:38:21.910332] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:40.164 [2024-11-20 17:38:21.910594] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:40.164 [2024-11-20 17:38:21.911975] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:40.164 Available Spare Threshold: 0% 00:17:40.423 Life Percentage Used: 0% 00:17:40.423 Data Units Read: 0 00:17:40.423 Data Units Written: 0 00:17:40.423 Host Read Commands: 0 00:17:40.423 Host Write Commands: 0 00:17:40.423 Controller Busy Time: 0 minutes 00:17:40.423 Power Cycles: 0 00:17:40.423 Power On Hours: 0 hours 00:17:40.423 Unsafe Shutdowns: 0 00:17:40.423 Unrecoverable Media Errors: 0 00:17:40.423 Lifetime Error Log Entries: 0 00:17:40.423 Warning Temperature Time: 0 minutes 00:17:40.423 Critical Temperature Time: 0 minutes 00:17:40.423 00:17:40.423 Number
of Queues 00:17:40.423 ================ 00:17:40.423 Number of I/O Submission Queues: 127 00:17:40.423 Number of I/O Completion Queues: 127 00:17:40.423 00:17:40.423 Active Namespaces 00:17:40.423 ================= 00:17:40.423 Namespace ID:1 00:17:40.423 Error Recovery Timeout: Unlimited 00:17:40.423 Command Set Identifier: NVM (00h) 00:17:40.423 Deallocate: Supported 00:17:40.423 Deallocated/Unwritten Error: Not Supported 00:17:40.423 Deallocated Read Value: Unknown 00:17:40.423 Deallocate in Write Zeroes: Not Supported 00:17:40.423 Deallocated Guard Field: 0xFFFF 00:17:40.423 Flush: Supported 00:17:40.423 Reservation: Supported 00:17:40.423 Namespace Sharing Capabilities: Multiple Controllers 00:17:40.423 Size (in LBAs): 131072 (0GiB) 00:17:40.423 Capacity (in LBAs): 131072 (0GiB) 00:17:40.423 Utilization (in LBAs): 131072 (0GiB) 00:17:40.423 NGUID: C5D629493B944BF088EC0F2A986FDAFA 00:17:40.423 UUID: c5d62949-3b94-4bf0-88ec-0f2a986fdafa 00:17:40.423 Thin Provisioning: Not Supported 00:17:40.423 Per-NS Atomic Units: Yes 00:17:40.423 Atomic Boundary Size (Normal): 0 00:17:40.423 Atomic Boundary Size (PFail): 0 00:17:40.423 Atomic Boundary Offset: 0 00:17:40.423 Maximum Single Source Range Length: 65535 00:17:40.424 Maximum Copy Length: 65535 00:17:40.424 Maximum Source Range Count: 1 00:17:40.424 NGUID/EUI64 Never Reused: No 00:17:40.424 Namespace Write Protected: No 00:17:40.424 Number of LBA Formats: 1 00:17:40.424 Current LBA Format: LBA Format #00 00:17:40.424 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:40.424 00:17:40.424 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:40.681 [2024-11-20 17:38:22.275559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:45.946 Initializing NVMe Controllers 00:17:45.946 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:45.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:45.946 Initialization complete. Launching workers. 
00:17:45.946 ======================================================== 00:17:45.946 Latency(us) 00:17:45.946 Device Information : IOPS MiB/s Average min max 00:17:45.946 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29443.61 115.01 4346.45 1247.31 9026.36 00:17:45.946 ======================================================== 00:17:45.946 Total : 29443.61 115.01 4346.45 1247.31 9026.36 00:17:45.946 00:17:45.946 [2024-11-20 17:38:27.368820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:45.946 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:46.205 [2024-11-20 17:38:27.766643] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:51.466 Initializing NVMe Controllers 00:17:51.466 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:51.466 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:51.466 Initialization complete. Launching workers. 00:17:51.466 ======================================================== 00:17:51.466 Latency(us) 00:17:51.466 Device Information : IOPS MiB/s Average min max 00:17:51.466 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29569.63 115.51 4327.80 1268.10 7647.88 00:17:51.466 ======================================================== 00:17:51.466 Total : 29569.63 115.51 4327.80 1268.10 7647.88 00:17:51.466 00:17:51.466 [2024-11-20 17:38:32.777033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:51.466 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:51.466 [2024-11-20 17:38:33.058616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:56.769 [2024-11-20 17:38:38.183246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:56.769 Initializing NVMe Controllers 00:17:56.769 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:56.769 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:56.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:56.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:56.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:56.769 Initialization complete. Launching workers. 
00:17:56.769 Starting thread on core 2 00:17:56.769 Starting thread on core 3 00:17:56.769 Starting thread on core 1 00:17:56.769 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:57.029 [2024-11-20 17:38:38.619793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:00.509 [2024-11-20 17:38:41.694852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:00.509 Initializing NVMe Controllers 00:18:00.509 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:00.509 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:00.509 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:00.509 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:00.509 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:00.509 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:00.509 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:18:00.509 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:00.509 Initialization complete. Launching workers. 00:18:00.509 Starting thread on core 1 with urgent priority queue 00:18:00.509 Starting thread on core 2 with urgent priority queue 00:18:00.509 Starting thread on core 3 with urgent priority queue 00:18:00.509 Starting thread on core 0 with urgent priority queue 00:18:00.509 SPDK bdev Controller (SPDK2 ) core 0: 2612.33 IO/s 38.28 secs/100000 ios 00:18:00.509 SPDK bdev Controller (SPDK2 ) core 1: 2560.00 IO/s 39.06 secs/100000 ios 00:18:00.509 SPDK bdev Controller (SPDK2 ) core 2: 2773.33 IO/s 36.06 secs/100000 ios 00:18:00.509 SPDK bdev Controller (SPDK2 ) core 3: 3008.00 IO/s 33.24 secs/100000 ios 00:18:00.509 ======================================================== 00:18:00.509 00:18:00.509 17:38:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:00.509 [2024-11-20 17:38:42.076481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:00.509 Initializing NVMe Controllers 00:18:00.509 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:00.509 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:00.509 Namespace ID: 1 size: 0GB 00:18:00.509 Initialization complete. 00:18:00.509 INFO: using host memory buffer for IO 00:18:00.509 Hello world! 
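All of the example binaries in this block (reconnect, arbitration, hello_world, and the overhead tool that follows) target the same vfio-user endpoint through one transport ID string; with the values from this run it reads:

  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

Here traddr names the directory holding the controller socket and subnqn selects the subsystem; the spdk_nvme_identify and spdk_nvme_perf runs earlier in the log consume the identical string.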
00:18:00.509 [2024-11-20 17:38:42.087940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:00.510 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:00.788 [2024-11-20 17:38:42.506221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:02.168 Initializing NVMe Controllers 00:18:02.168 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:02.168 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:02.168 Initialization complete. Launching workers. 00:18:02.168 submit (in ns) avg, min, max = 9151.0, 3854.1, 4045260.0 00:18:02.168 complete (in ns) avg, min, max = 23188.2, 2352.7, 4057581.4 00:18:02.168 00:18:02.168 Submit histogram 00:18:02.168 ================ 00:18:02.168 Range in us Cumulative Count 00:18:02.168 3.840 - 3.869: 0.0236% ( 3) 00:18:02.168 3.869 - 3.898: 0.0393% ( 2) 00:18:02.168 3.898 - 3.927: 0.0786% ( 5) 00:18:02.168 3.927 - 3.956: 0.3459% ( 34) 00:18:02.168 3.956 - 3.985: 2.1777% ( 233) 00:18:02.168 3.985 - 4.015: 6.2028% ( 512) 00:18:02.168 4.015 - 4.044: 13.6321% ( 945) 00:18:02.168 4.044 - 4.073: 26.0142% ( 1575) 00:18:02.168 4.073 - 4.102: 39.0409% ( 1657) 00:18:02.168 4.102 - 4.131: 48.9230% ( 1257) 00:18:02.168 4.131 - 4.160: 56.5487% ( 970) 00:18:02.168 4.160 - 4.189: 62.4371% ( 749) 00:18:02.168 4.189 - 4.218: 66.0613% ( 461) 00:18:02.168 4.218 - 4.247: 68.7107% ( 337) 00:18:02.168 4.247 - 4.276: 70.9984% ( 291) 00:18:02.168 4.276 - 4.305: 73.0110% ( 256) 00:18:02.168 4.305 - 4.335: 74.4733% ( 186) 00:18:02.168 4.335 - 4.364: 76.2814% ( 230) 00:18:02.168 4.364 - 4.393: 77.9717% ( 215) 00:18:02.168 4.393 - 4.422: 79.6855% ( 218) 00:18:02.168 4.422 - 4.451: 81.2500% ( 199) 00:18:02.168 4.451 - 4.480: 82.5708% ( 168) 00:18:02.168 4.480 - 4.509: 83.6242% ( 134) 00:18:02.168 4.509 - 4.538: 84.4654% ( 107) 00:18:02.168 4.538 - 4.567: 85.2673% ( 102) 00:18:02.168 4.567 - 4.596: 85.8412% ( 73) 00:18:02.168 4.596 - 4.625: 86.2657% ( 54) 00:18:02.168 4.625 - 4.655: 86.7925% ( 67) 00:18:02.168 4.655 - 4.684: 87.1462% ( 45) 00:18:02.168 4.684 - 4.713: 87.5472% ( 51) 00:18:02.168 4.713 - 4.742: 88.0267% ( 61) 00:18:02.168 4.742 - 4.771: 88.5220% ( 63) 00:18:02.168 4.771 - 4.800: 88.9308% ( 52) 00:18:02.168 4.800 - 4.829: 89.4811% ( 70) 00:18:02.168 4.829 - 4.858: 90.0393% ( 71) 00:18:02.168 4.858 - 4.887: 90.6525% ( 78) 00:18:02.168 4.887 - 4.916: 91.1950% ( 69) 00:18:02.168 4.916 - 4.945: 91.8082% ( 78) 00:18:02.168 4.945 - 4.975: 92.2799% ( 60) 00:18:02.168 4.975 - 5.004: 92.7123% ( 55) 00:18:02.168 5.004 - 5.033: 93.1132% ( 51) 00:18:02.168 5.033 - 5.062: 93.4355% ( 41) 00:18:02.168 5.062 - 5.091: 93.8050% ( 47) 00:18:02.168 5.091 - 5.120: 94.0016% ( 25) 00:18:02.168 5.120 - 5.149: 94.2453% ( 31) 00:18:02.168 5.149 - 5.178: 94.4575% ( 27) 00:18:02.169 5.178 - 5.207: 94.6619% ( 26) 00:18:02.169 5.207 - 5.236: 94.8664% ( 26) 00:18:02.169 5.236 - 5.265: 95.0079% ( 18) 00:18:02.169 5.265 - 5.295: 95.1887% ( 23) 00:18:02.169 5.295 - 5.324: 95.3459% ( 20) 00:18:02.169 5.324 - 5.353: 95.5189% ( 22) 00:18:02.169 5.353 - 5.382: 95.6053% ( 11) 00:18:02.169 5.382 - 5.411: 95.6604% ( 7) 00:18:02.169 5.411 - 5.440: 95.7783% ( 15) 00:18:02.169 5.440 - 5.469: 95.8648% ( 11) 00:18:02.169 5.469 - 5.498: 95.9513% ( 11) 00:18:02.169 5.498 - 5.527: 
96.0770% ( 16) 00:18:02.169 5.527 - 5.556: 96.2028% ( 16) 00:18:02.169 5.556 - 5.585: 96.3050% ( 13) 00:18:02.169 5.585 - 5.615: 96.3758% ( 9) 00:18:02.169 5.615 - 5.644: 96.4780% ( 13) 00:18:02.169 5.644 - 5.673: 96.5723% ( 12) 00:18:02.169 5.673 - 5.702: 96.6195% ( 6) 00:18:02.169 5.702 - 5.731: 96.7060% ( 11) 00:18:02.169 5.731 - 5.760: 96.7689% ( 8) 00:18:02.169 5.760 - 5.789: 96.8082% ( 5) 00:18:02.169 5.789 - 5.818: 96.8553% ( 6) 00:18:02.169 5.818 - 5.847: 96.9497% ( 12) 00:18:02.169 5.847 - 5.876: 96.9969% ( 6) 00:18:02.169 5.876 - 5.905: 97.0755% ( 10) 00:18:02.169 5.905 - 5.935: 97.1384% ( 8) 00:18:02.169 5.935 - 5.964: 97.2091% ( 9) 00:18:02.169 5.964 - 5.993: 97.2642% ( 7) 00:18:02.169 5.993 - 6.022: 97.3349% ( 9) 00:18:02.169 6.022 - 6.051: 97.4292% ( 12) 00:18:02.169 6.051 - 6.080: 97.4607% ( 4) 00:18:02.169 6.080 - 6.109: 97.4921% ( 4) 00:18:02.169 6.109 - 6.138: 97.5157% ( 3) 00:18:02.169 6.138 - 6.167: 97.5943% ( 10) 00:18:02.169 6.167 - 6.196: 97.6179% ( 3) 00:18:02.169 6.196 - 6.225: 97.6336% ( 2) 00:18:02.169 6.225 - 6.255: 97.6651% ( 4) 00:18:02.169 6.255 - 6.284: 97.7201% ( 7) 00:18:02.169 6.284 - 6.313: 97.7909% ( 9) 00:18:02.169 6.313 - 6.342: 97.8459% ( 7) 00:18:02.169 6.342 - 6.371: 97.8774% ( 4) 00:18:02.169 6.371 - 6.400: 97.9245% ( 6) 00:18:02.169 6.400 - 6.429: 97.9560% ( 4) 00:18:02.169 6.429 - 6.458: 97.9638% ( 1) 00:18:02.169 6.458 - 6.487: 97.9874% ( 3) 00:18:02.169 6.487 - 6.516: 98.0267% ( 5) 00:18:02.169 6.516 - 6.545: 98.0425% ( 2) 00:18:02.169 6.545 - 6.575: 98.0582% ( 2) 00:18:02.169 6.575 - 6.604: 98.0660% ( 1) 00:18:02.169 6.604 - 6.633: 98.1053% ( 5) 00:18:02.169 6.633 - 6.662: 98.1132% ( 1) 00:18:02.169 6.662 - 6.691: 98.1211% ( 1) 00:18:02.169 6.691 - 6.720: 98.1604% ( 5) 00:18:02.169 6.720 - 6.749: 98.1682% ( 1) 00:18:02.169 6.749 - 6.778: 98.1918% ( 3) 00:18:02.169 6.778 - 6.807: 98.2075% ( 2) 00:18:02.169 6.807 - 6.836: 98.2311% ( 3) 00:18:02.169 6.836 - 6.865: 98.2547% ( 3) 00:18:02.169 6.865 - 6.895: 98.2626% ( 1) 00:18:02.169 6.895 - 6.924: 98.2783% ( 2) 00:18:02.169 6.924 - 6.953: 98.2940% ( 2) 00:18:02.169 6.953 - 6.982: 98.3019% ( 1) 00:18:02.169 6.982 - 7.011: 98.3097% ( 1) 00:18:02.169 7.011 - 7.040: 98.3176% ( 1) 00:18:02.169 7.040 - 7.069: 98.3255% ( 1) 00:18:02.169 7.069 - 7.098: 98.3333% ( 1) 00:18:02.169 7.098 - 7.127: 98.3491% ( 2) 00:18:02.169 7.127 - 7.156: 98.3569% ( 1) 00:18:02.169 7.156 - 7.185: 98.3648% ( 1) 00:18:02.169 7.215 - 7.244: 98.3726% ( 1) 00:18:02.169 7.273 - 7.302: 98.3805% ( 1) 00:18:02.169 7.302 - 7.331: 98.3884% ( 1) 00:18:02.169 7.360 - 7.389: 98.4041% ( 2) 00:18:02.169 7.418 - 7.447: 98.4119% ( 1) 00:18:02.169 7.505 - 7.564: 98.4198% ( 1) 00:18:02.169 7.622 - 7.680: 98.4434% ( 3) 00:18:02.169 7.680 - 7.738: 98.4513% ( 1) 00:18:02.169 7.738 - 7.796: 98.4670% ( 2) 00:18:02.169 7.796 - 7.855: 98.4748% ( 1) 00:18:02.169 7.855 - 7.913: 98.4827% ( 1) 00:18:02.169 7.971 - 8.029: 98.4984% ( 2) 00:18:02.169 8.029 - 8.087: 98.5063% ( 1) 00:18:02.169 8.087 - 8.145: 98.5299% ( 3) 00:18:02.169 8.145 - 8.204: 98.5377% ( 1) 00:18:02.169 8.204 - 8.262: 98.5692% ( 4) 00:18:02.169 8.262 - 8.320: 98.5928% ( 3) 00:18:02.169 8.320 - 8.378: 98.6321% ( 5) 00:18:02.169 8.378 - 8.436: 98.6399% ( 1) 00:18:02.169 8.436 - 8.495: 98.6635% ( 3) 00:18:02.169 8.495 - 8.553: 98.6714% ( 1) 00:18:02.169 8.553 - 8.611: 98.6871% ( 2) 00:18:02.169 8.611 - 8.669: 98.6950% ( 1) 00:18:02.169 8.669 - 8.727: 98.7264% ( 4) 00:18:02.169 8.727 - 8.785: 98.7421% ( 2) 00:18:02.169 8.785 - 8.844: 98.7579% ( 2) 00:18:02.169 8.902 - 8.960: 98.7736% ( 2) 
00:18:02.169 8.960 - 9.018: 98.7893% ( 2) 00:18:02.169 9.076 - 9.135: 98.7972% ( 1) 00:18:02.169 9.135 - 9.193: 98.8129% ( 2) 00:18:02.169 9.193 - 9.251: 98.8208% ( 1) 00:18:02.169 9.251 - 9.309: 98.8365% ( 2) 00:18:02.169 9.309 - 9.367: 98.8443% ( 1) 00:18:02.169 9.367 - 9.425: 98.8522% ( 1) 00:18:02.169 9.425 - 9.484: 98.8601% ( 1) 00:18:02.169 9.484 - 9.542: 98.8758% ( 2) 00:18:02.169 9.542 - 9.600: 98.8915% ( 2) 00:18:02.169 9.600 - 9.658: 98.9230% ( 4) 00:18:02.169 9.658 - 9.716: 98.9308% ( 1) 00:18:02.169 9.833 - 9.891: 98.9387% ( 1) 00:18:02.169 9.891 - 9.949: 98.9465% ( 1) 00:18:02.169 10.007 - 10.065: 98.9623% ( 2) 00:18:02.169 10.065 - 10.124: 98.9701% ( 1) 00:18:02.169 10.240 - 10.298: 98.9780% ( 1) 00:18:02.169 10.298 - 10.356: 98.9937% ( 2) 00:18:02.169 10.356 - 10.415: 99.0016% ( 1) 00:18:02.169 10.415 - 10.473: 99.0173% ( 2) 00:18:02.169 10.531 - 10.589: 99.0566% ( 5) 00:18:02.169 10.589 - 10.647: 99.0723% ( 2) 00:18:02.169 10.705 - 10.764: 99.0881% ( 2) 00:18:02.169 10.822 - 10.880: 99.0959% ( 1) 00:18:02.169 10.880 - 10.938: 99.1038% ( 1) 00:18:02.169 11.055 - 11.113: 99.1352% ( 4) 00:18:02.169 11.171 - 11.229: 99.1431% ( 1) 00:18:02.169 11.229 - 11.287: 99.1667% ( 3) 00:18:02.169 11.287 - 11.345: 99.1745% ( 1) 00:18:02.169 11.345 - 11.404: 99.1824% ( 1) 00:18:02.169 11.404 - 11.462: 99.1903% ( 1) 00:18:02.169 11.462 - 11.520: 99.2138% ( 3) 00:18:02.169 11.578 - 11.636: 99.2217% ( 1) 00:18:02.169 11.636 - 11.695: 99.2374% ( 2) 00:18:02.169 11.695 - 11.753: 99.2453% ( 1) 00:18:02.169 11.811 - 11.869: 99.2610% ( 2) 00:18:02.169 11.927 - 11.985: 99.2767% ( 2) 00:18:02.169 11.985 - 12.044: 99.2925% ( 2) 00:18:02.169 12.044 - 12.102: 99.3160% ( 3) 00:18:02.169 12.102 - 12.160: 99.3396% ( 3) 00:18:02.169 12.160 - 12.218: 99.3475% ( 1) 00:18:02.169 12.335 - 12.393: 99.3553% ( 1) 00:18:02.169 12.393 - 12.451: 99.3711% ( 2) 00:18:02.169 12.451 - 12.509: 99.3789% ( 1) 00:18:02.169 12.509 - 12.567: 99.3868% ( 1) 00:18:02.169 12.567 - 12.625: 99.3947% ( 1) 00:18:02.169 12.684 - 12.742: 99.4104% ( 2) 00:18:02.169 12.742 - 12.800: 99.4182% ( 1) 00:18:02.169 12.800 - 12.858: 99.4261% ( 1) 00:18:02.169 12.858 - 12.916: 99.4340% ( 1) 00:18:02.169 12.975 - 13.033: 99.4497% ( 2) 00:18:02.169 13.033 - 13.091: 99.4654% ( 2) 00:18:02.169 13.091 - 13.149: 99.4733% ( 1) 00:18:02.169 13.265 - 13.324: 99.4890% ( 2) 00:18:02.169 13.324 - 13.382: 99.4969% ( 1) 00:18:02.169 13.440 - 13.498: 99.5126% ( 2) 00:18:02.169 13.498 - 13.556: 99.5204% ( 1) 00:18:02.169 13.964 - 14.022: 99.5283% ( 1) 00:18:02.169 14.080 - 14.138: 99.5362% ( 1) 00:18:02.169 14.196 - 14.255: 99.5440% ( 1) 00:18:02.169 14.255 - 14.313: 99.5519% ( 1) 00:18:02.169 14.313 - 14.371: 99.5597% ( 1) 00:18:02.169 14.429 - 14.487: 99.5676% ( 1) 00:18:02.169 14.895 - 15.011: 99.5755% ( 1) 00:18:02.169 15.011 - 15.127: 99.5991% ( 3) 00:18:02.169 15.244 - 15.360: 99.6148% ( 2) 00:18:02.169 15.476 - 15.593: 99.6305% ( 2) 00:18:02.169 15.593 - 15.709: 99.6619% ( 4) 00:18:02.169 15.825 - 15.942: 99.6855% ( 3) 00:18:02.169 15.942 - 16.058: 99.6934% ( 1) 00:18:02.169 16.175 - 16.291: 99.7013% ( 1) 00:18:02.169 16.291 - 16.407: 99.7091% ( 1) 00:18:02.169 16.524 - 16.640: 99.7170% ( 1) 00:18:02.169 16.989 - 17.105: 99.7248% ( 1) 00:18:02.169 17.338 - 17.455: 99.7406% ( 2) 00:18:02.169 18.036 - 18.153: 99.7563% ( 2) 00:18:02.169 19.084 - 19.200: 99.7642% ( 1) 00:18:02.169 19.200 - 19.316: 99.7720% ( 1) 00:18:02.169 19.782 - 19.898: 99.7799% ( 1) 00:18:02.169 20.131 - 20.247: 99.7956% ( 2) 00:18:02.169 20.596 - 20.713: 99.8035% ( 1) 00:18:02.169 
21.295 - 21.411: 99.8113% ( 1) 00:18:02.169 21.527 - 21.644: 99.8192% ( 1) 00:18:02.169 21.760 - 21.876: 99.8270% ( 1) 00:18:02.169 22.575 - 22.691: 99.8349% ( 1) 00:18:02.169 25.600 - 25.716: 99.8428% ( 1) 00:18:02.169 27.229 - 27.345: 99.8506% ( 1) 00:18:02.169 27.927 - 28.044: 99.8585% ( 1) 00:18:02.169 29.789 - 30.022: 99.8664% ( 1) 00:18:02.169 31.651 - 31.884: 99.8742% ( 1) 00:18:02.170 40.960 - 41.193: 99.8821% ( 1) 00:18:02.170 3991.738 - 4021.527: 99.8899% ( 1) 00:18:02.170 4021.527 - 4051.316: 100.0000% ( 14) 00:18:02.170 00:18:02.170 Complete histogram 00:18:02.170 ================== 00:18:02.170 Range in us Cumulative Count 00:18:02.170 2.342 - 2.356: 0.0629% ( 8) 00:18:02.170 2.356 - 2.371: 5.5110% ( 693) 00:18:02.170 2.371 - 2.385: 26.1950% ( 2631) 00:18:02.170 2.385 - 2.400: 46.0692% ( 2528) 00:18:02.170 2.400 - 2.415: 61.4780% ( 1960) 00:18:02.170 2.415 - 2.429: 65.8333% ( 554) 00:18:02.170 2.429 - 2.444: 68.1918% ( 300) 00:18:02.170 2.444 - 2.458: 71.0613% ( 365) 00:18:02.170 2.458 - 2.473: 74.3789% ( 422) 00:18:02.170 2.473 - 2.487: 77.5708% ( 406) 00:18:02.170 2.487 - 2.502: 79.7563% ( 278) 00:18:02.170 2.502 - 2.516: 81.2814% ( 194) 00:18:02.170 2.516 - 2.531: 82.1619% ( 112) 00:18:02.170 2.531 - 2.545: 83.0425% ( 112) 00:18:02.170 2.545 - 2.560: 83.7579% ( 91) 00:18:02.170 2.560 - 2.575: 84.5126% ( 96) 00:18:02.170 2.575 - 2.589: 85.1572% ( 82) 00:18:02.170 2.589 - 2.604: 85.5582% ( 51) 00:18:02.170 2.604 - 2.618: 86.0377% ( 61) 00:18:02.170 2.618 - 2.633: 86.4308% ( 50) 00:18:02.170 2.633 - 2.647: 86.8003% ( 47) 00:18:02.170 2.647 - 2.662: 87.1226% ( 41) 00:18:02.170 2.662 - 2.676: 87.3821% ( 33) 00:18:02.170 2.676 - 2.691: 87.6494% ( 34) 00:18:02.170 2.691 - 2.705: 87.8459% ( 25) 00:18:02.170 2.705 - 2.720: 88.1132% ( 34) 00:18:02.170 2.720 - 2.735: 88.2940% ( 23) 00:18:02.170 2.735 - 2.749: 88.4827% ( 24) 00:18:02.170 2.749 - 2.764: 88.7107% ( 29) 00:18:02.170 2.764 - 2.778: 88.9623% ( 32) 00:18:02.170 2.778 - 2.793: 89.1824% ( 28) 00:18:02.170 2.793 - 2.807: 89.3160% ( 17) 00:18:02.170 2.807 - 2.822: 89.6226% ( 39) 00:18:02.170 2.822 - 2.836: 89.8664% ( 31) 00:18:02.170 2.836 - 2.851: 90.1965% ( 42) 00:18:02.170 2.851 - 2.865: 90.3695% ( 22) 00:18:02.170 2.865 - 2.880: 90.6289% ( 33) 00:18:02.170 2.880 - 2.895: 90.9513% ( 41) 00:18:02.170 2.895 - 2.909: 91.3286% ( 48) 00:18:02.170 2.909 - 2.924: 91.6352% ( 39) 00:18:02.170 2.924 - 2.938: 91.9969% ( 46) 00:18:02.170 2.938 - 2.953: 92.5000% ( 64) 00:18:02.170 2.953 - 2.967: 92.8459% ( 44) 00:18:02.170 2.967 - 2.982: 93.2469% ( 51) 00:18:02.170 2.982 - 2.996: 93.6871% ( 56) 00:18:02.170 2.996 - 3.011: 93.9544% ( 34) 00:18:02.170 3.011 - 3.025: 94.3160% ( 46) 00:18:02.170 3.025 - 3.040: 94.5755% ( 33) 00:18:02.170 3.040 - 3.055: 94.8035% ( 29) 00:18:02.170 3.055 - 3.069: 95.0157% ( 27) 00:18:02.170 3.069 - 3.084: 95.2044% ( 24) 00:18:02.170 3.084 - 3.098: 95.3852% ( 23) 00:18:02.170 3.098 - 3.113: 95.5425% ( 20) 00:18:02.170 3.113 - 3.127: 95.6289% ( 11) 00:18:02.170 3.127 - 3.142: 95.7390% ( 14) 00:18:02.170 3.142 - 3.156: 95.8569% ( 15) 00:18:02.170 3.156 - 3.171: 95.9670% ( 14) 00:18:02.170 3.171 - 3.185: 96.1085% ( 18) 00:18:02.170 3.185 - 3.200: 96.1792% ( 9) 00:18:02.170 3.200 - 3.215: 96.2736% ( 12) 00:18:02.170 3.215 - 3.229: 96.3208% ( 6) 00:18:02.170 3.229 - 3.244: 96.4072% ( 11) 00:18:02.170 3.244 - 3.258: 96.4701% ( 8) 00:18:02.170 3.258 - 3.273: 96.5409% ( 9) 00:18:02.170 3.273 - 3.287: 96.5959% ( 7) 00:18:02.170 3.287 - 3.302: 96.6588% ( 8) 00:18:02.170 3.302 - 3.316: 96.7374% ( 10) 00:18:02.170 3.316 - 
3.331: 96.7610% ( 3) 00:18:02.170 3.331 - 3.345: 96.7925% ( 4) 00:18:02.170 3.345 - 3.360: 96.8160% ( 3) 00:18:02.170 3.360 - 3.375: 96.8947% ( 10) 00:18:02.170 3.375 - 3.389: 96.9182% ( 3) 00:18:02.170 3.389 - 3.404: 96.9733% ( 7) 00:18:02.170 3.404 - 3.418: 97.0047% ( 4) 00:18:02.170 3.418 - 3.433: 97.0362% ( 4) 00:18:02.170 3.433 - 3.447: 97.0833% ( 6) 00:18:02.170 3.447 - 3.462: 97.1148% ( 4) 00:18:02.170 3.462 - 3.476: 97.1855% ( 9) 00:18:02.170 3.476 - 3.491: 97.2013% ( 2) 00:18:02.170 3.491 - 3.505: 97.2248% ( 3) 00:18:02.170 3.505 - 3.520: 97.2484% ( 3) 00:18:02.170 3.520 - 3.535: 97.3035% ( 7) 00:18:02.170 3.535 - 3.549: 97.3349% ( 4) 00:18:02.170 3.549 - 3.564: 97.3664% ( 4) 00:18:02.170 3.564 - 3.578: 97.3978% ( 4) 00:18:02.170 3.578 - 3.593: 97.4135% ( 2) 00:18:02.170 3.593 - 3.607: 97.4371% ( 3) 00:18:02.170 3.607 - 3.622: 97.4686% ( 4) 00:18:02.170 3.622 - 3.636: 97.4921% ( 3) 00:18:02.170 3.636 - 3.651: 97.5314% ( 5) 00:18:02.170 3.651 - 3.665: 97.5786% ( 6) 00:18:02.170 3.665 - 3.680: 97.6258% ( 6) 00:18:02.170 3.680 - 3.695: 97.6415% ( 2) 00:18:02.170 3.695 - 3.709: 97.6808% ( 5) 00:18:02.170 3.709 - 3.724: 97.7358% ( 7) 00:18:02.170 3.724 - 3.753: 97.7987% ( 8) 00:18:02.170 3.753 - 3.782: 97.8302% ( 4) 00:18:02.170 3.782 - 3.811: 97.8774% ( 6) 00:18:02.170 3.811 - 3.840: 97.9324% ( 7) 00:18:02.170 3.840 - 3.869: 97.9874% ( 7) 00:18:02.170 3.869 - 3.898: 98.0267% ( 5) 00:18:02.170 3.898 - 3.927: 98.0503% ( 3) 00:18:02.170 3.927 - 3.956: 98.0896% ( 5) 00:18:02.170 3.956 - 3.985: 98.1132% ( 3) 00:18:02.170 3.985 - 4.015: 98.1682% ( 7) 00:18:02.170 4.015 - 4.044: 98.2075% ( 5) 00:18:02.170 4.044 - 4.073: 98.2233% ( 2) 00:18:02.170 4.073 - 4.102: 98.2626% ( 5) 00:18:02.170 4.102 - 4.131: 98.3333% ( 9) 00:18:02.170 4.131 - 4.160: 98.3491% ( 2) 00:18:02.170 4.160 - 4.189: 98.3726% ( 3) 00:18:02.170 4.189 - 4.218: 98.3962% ( 3) 00:18:02.170 4.218 - 4.247: 98.4041% ( 1) 00:18:02.170 4.247 - 4.276: 98.4198% ( 2) 00:18:02.170 4.276 - 4.305: 98.4355% ( 2) 00:18:02.170 4.305 - 4.335: 98.4513% ( 2) 00:18:02.170 4.335 - 4.364: 98.4591% ( 1) 00:18:02.170 4.364 - 4.393: 98.4748% ( 2) 00:18:02.170 4.567 - 4.596: 98.4827% ( 1) 00:18:02.170 4.625 - 4.655: 98.4984% ( 2) 00:18:02.170 4.800 - 4.829: 98.5063% ( 1) 00:18:02.170 4.887 - 4.916: 98.5142% ( 1) 00:18:02.170 4.945 - 4.975: 98.5220% ( 1) 00:18:02.170 5.033 - 5.062: 98.5299% ( 1) 00:18:02.170 5.324 - 5.353: 98.5377% ( 1) 00:18:02.170 5.440 - 5.469: 98.5456% ( 1) 00:18:02.170 5.498 - 5.527: 98.5535% ( 1) 00:18:02.170 5.760 - 5.789: 98.5613% ( 1) 00:18:02.170 5.818 - 5.847: 98.5692% ( 1) 00:18:02.170 5.847 - 5.876: 98.5770% ( 1) 00:18:02.170 6.196 - 6.225: 98.5849% ( 1) 00:18:02.170 6.225 - 6.255: 98.5928% ( 1) 00:18:02.170 6.313 - 6.342: 98.6006% ( 1) 00:18:02.170 6.691 - 6.720: 98.6085% ( 1) 00:18:02.170 6.778 - 6.807: 98.6164% ( 1) 00:18:02.170 7.069 - 7.098: 98.6242% ( 1) 00:18:02.170 7.185 - 7.215: 98.6321% ( 1) 00:18:02.170 7.215 - 7.244: 98.6399% ( 1) 00:18:02.170 7.855 - 7.913: 98.6478% ( 1) 00:18:02.170 7.913 - 7.971: 98.6557% ( 1) 00:18:02.170 8.320 - 8.378: 98.6635% ( 1) 00:18:02.170 8.378 - 8.436: 98.6714% ( 1) 00:18:02.170 8.495 - 8.553: 98.6792% ( 1) 00:18:02.170 8.844 - 8.902: 98.6950% ( 2) 00:18:02.170 8.960 - 9.018: 98.7028% ( 1) 00:18:02.170 9.135 - 9.193: 98.7107% ( 1) 00:18:02.170 9.193 - 9.251: 98.7186% ( 1) 00:18:02.170 9.251 - 9.309: 98.7264% ( 1) 00:18:02.170 9.309 - 9.367: 98.7343% ( 1) 00:18:02.170 9.367 - 9.425: 98.7500% ( 2) 00:18:02.170 9.484 - 9.542: 98.7579% ( 1) 00:18:02.170 9.600 - 9.658: 98.7657% ( 1) 
00:18:02.170 9.658 - 9.716: 98.7736% ( 1) 00:18:02.170 9.716 - 9.775: 98.7814% ( 1) 00:18:02.170 9.775 - 9.833: 98.7972% ( 2) 00:18:02.170 9.833 - 9.891: 98.8129% ( 2) 00:18:02.170 9.949 - 10.007: 98.8208% ( 1) 00:18:02.170 10.007 - 10.065: 98.8286% ( 1) 00:18:02.170 10.065 - 10.124: 98.8443% ( 2) 00:18:02.170 10.240 - 10.298: 98.8522% ( 1) 00:18:02.170 10.298 - 10.356: 98.8601% ( 1) 00:18:02.170 10.356 - 10.415: 98.8758% ( 2) 00:18:02.170 10.589 - 10.647: 98.8915% ( 2) 00:18:02.170 10.705 - 10.764: 98.8994% ( 1) 00:18:02.170 10.764 - 10.822: 98.9072% ( 1) 00:18:02.170 10.880 - 10.938: 98.9151% ( 1) 00:18:02.170 10.938 - 10.996: 98.9230% ( 1) 00:18:02.170 11.113 - 11.171: 98.9308% ( 1) 00:18:02.170 11.229 - 11.287: 98.9465% ( 2) 00:18:02.170 11.345 - 11.404: 98.9544% ( 1) 00:18:02.170 11.520 - 11.578: 98.9623% ( 1) 00:18:02.170 11.869 - 11.927: 98.9701% ( 1) 00:18:02.170 11.927 - 11.985: 98.9780% ( 1) 00:18:02.170 12.160 - 12.218: 98.9858% ( 1) 00:18:02.170 12.218 - 12.276: 99.0016% ( 2) 00:18:02.170 12.276 - 12.335: 99.0094% ( 1) 00:18:02.170 12.335 - 12.393: 99.0330% ( 3) 00:18:02.170 12.684 - 12.742: 99.0409% ( 1) 00:18:02.170 12.742 - 12.800: 99.0487% ( 1) 00:18:02.170 12.800 - 12.858: 99.0566% ( 1) 00:18:02.170 12.858 - 12.916: 99.0645% ( 1) 00:18:02.171 12.916 - 12.975: 99.0723% ( 1) 00:18:02.171 13.033 - 13.091: 99.0802% ( 1) 00:18:02.171 13.149 - 13.207: 99.1038% ( 3) 00:18:02.171 13.324 - 13.382: 99.1116% ( 1) 00:18:02.171 13.440 - 13.498: 99.1195% ( 1) 00:18:02.171 13.615 - 13.673: 99.1274% ( 1) 00:18:02.171 13.673 - 13.731: 99.1352% ( 1) 00:18:02.171 13.731 - 13.789: 99.1431% ( 1) 00:18:02.171 13.789 - 13.847: 99.1509% ( 1) 00:18:02.171 13.847 - 13.905: 99.1667% ( 2) 00:18:02.171 [2024-11-20 17:38:43.606201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:02.171 13.964 - 14.022: 99.1745% ( 1) 00:18:02.171 14.022 - 14.080: 99.1903% ( 2) 00:18:02.171 14.138 - 14.196: 99.1981% ( 1) 00:18:02.171 14.255 - 14.313: 99.2138% ( 2) 00:18:02.171 14.487 - 14.545: 99.2217% ( 1) 00:18:02.171 14.604 - 14.662: 99.2374% ( 2) 00:18:02.171 14.662 - 14.720: 99.2453% ( 1) 00:18:02.171 15.127 - 15.244: 99.2610% ( 2) 00:18:02.171 15.244 - 15.360: 99.2767% ( 2) 00:18:02.171 15.360 - 15.476: 99.2846% ( 1) 00:18:02.171 15.593 - 15.709: 99.2925% ( 1) 00:18:02.171 15.709 - 15.825: 99.3003% ( 1) 00:18:02.171 15.825 - 15.942: 99.3160% ( 2) 00:18:02.171 17.338 - 17.455: 99.3239% ( 1) 00:18:02.171 17.455 - 17.571: 99.3318% ( 1) 00:18:02.171 18.618 - 18.735: 99.3632% ( 4) 00:18:02.171 19.200 - 19.316: 99.3711% ( 1) 00:18:02.171 19.549 - 19.665: 99.3789% ( 1) 00:18:02.171 19.665 - 19.782: 99.3868% ( 1) 00:18:02.171 20.480 - 20.596: 99.3947% ( 1) 00:18:02.171 21.062 - 21.178: 99.4025% ( 1) 00:18:02.171 21.993 - 22.109: 99.4182% ( 2) 00:18:02.171 23.505 - 23.622: 99.4261% ( 1) 00:18:02.171 24.204 - 24.320: 99.4340% ( 1) 00:18:02.171 25.367 - 25.484: 99.4418% ( 1) 00:18:02.171 25.600 - 25.716: 99.4497% ( 1) 00:18:02.171 25.949 - 26.065: 99.4575% ( 1) 00:18:02.171 26.880 - 26.996: 99.4654% ( 1) 00:18:02.171 28.742 - 28.858: 99.4733% ( 1) 00:18:02.171 30.022 - 30.255: 99.4811% ( 1) 00:18:02.171 41.425 - 41.658: 99.4890% ( 1) 00:18:02.171 3991.738 - 4021.527: 99.5283% ( 5) 00:18:02.171 4021.527 - 4051.316: 99.9764% ( 57) 00:18:02.171 4051.316 - 4081.105: 100.0000% ( 3) 00:18:02.171 00:18:02.171 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 
2 00:18:02.171 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:02.171 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:02.171 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:02.171 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:02.171 [ 00:18:02.171 { 00:18:02.171 "allow_any_host": true, 00:18:02.171 "hosts": [], 00:18:02.171 "listen_addresses": [], 00:18:02.171 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:02.171 "subtype": "Discovery" 00:18:02.171 }, 00:18:02.171 { 00:18:02.171 "allow_any_host": true, 00:18:02.171 "hosts": [], 00:18:02.171 "listen_addresses": [ 00:18:02.171 { 00:18:02.171 "adrfam": "IPv4", 00:18:02.171 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:02.171 "trsvcid": "0", 00:18:02.171 "trtype": "VFIOUSER" 00:18:02.171 } 00:18:02.171 ], 00:18:02.171 "max_cntlid": 65519, 00:18:02.171 "max_namespaces": 32, 00:18:02.171 "min_cntlid": 1, 00:18:02.171 "model_number": "SPDK bdev Controller", 00:18:02.171 "namespaces": [ 00:18:02.171 { 00:18:02.171 "bdev_name": "Malloc1", 00:18:02.171 "name": "Malloc1", 00:18:02.171 "nguid": "77C0C095BDAF41499ADE1DE2D5626C91", 00:18:02.171 "nsid": 1, 00:18:02.171 "uuid": "77c0c095-bdaf-4149-9ade-1de2d5626c91" 00:18:02.171 }, 00:18:02.171 { 00:18:02.171 "bdev_name": "Malloc3", 00:18:02.171 "name": "Malloc3", 00:18:02.171 "nguid": "05BB187D1A884466A1F88684DEFF5A53", 00:18:02.171 "nsid": 2, 00:18:02.171 "uuid": "05bb187d-1a88-4466-a1f8-8684deff5a53" 00:18:02.171 } 00:18:02.171 ], 00:18:02.171 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:02.171 "serial_number": "SPDK1", 00:18:02.171 "subtype": "NVMe" 00:18:02.171 }, 00:18:02.171 { 00:18:02.171 "allow_any_host": true, 00:18:02.171 "hosts": [], 00:18:02.171 "listen_addresses": [ 00:18:02.171 { 00:18:02.171 "adrfam": "IPv4", 00:18:02.171 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:02.171 "trsvcid": "0", 00:18:02.171 "trtype": "VFIOUSER" 00:18:02.171 } 00:18:02.171 ], 00:18:02.171 "max_cntlid": 65519, 00:18:02.171 "max_namespaces": 32, 00:18:02.171 "min_cntlid": 1, 00:18:02.171 "model_number": "SPDK bdev Controller", 00:18:02.171 "namespaces": [ 00:18:02.171 { 00:18:02.171 "bdev_name": "Malloc2", 00:18:02.171 "name": "Malloc2", 00:18:02.171 "nguid": "C5D629493B944BF088EC0F2A986FDAFA", 00:18:02.171 "nsid": 1, 00:18:02.171 "uuid": "c5d62949-3b94-4bf0-88ec-0f2a986fdafa" 00:18:02.171 } 00:18:02.171 ], 00:18:02.171 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:02.171 "serial_number": "SPDK2", 00:18:02.171 "subtype": "NVMe" 00:18:02.171 } 00:18:02.171 ] 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=93442 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:02.429 17:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:02.429 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=3 00:18:02.429 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:02.429 [2024-11-20 17:38:44.205258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:02.688 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.688 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.688 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:02.688 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:02.688 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:03.254 Malloc4 00:18:03.254 17:38:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:03.511 [2024-11-20 17:38:45.241152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.511 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:03.511 Asynchronous Event Request test 00:18:03.511 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:03.511 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:03.511 Registering asynchronous event callbacks... 00:18:03.511 Starting namespace attribute notice tests for all controllers... 00:18:03.511 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:03.511 aer_cb - Changed Namespace 00:18:03.511 Cleaning up... 
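The AER exchange traced above reduces to a short command sequence: start the aer tool against the vfio-user endpoint with a touch file it creates once its callbacks are armed, poll until that file appears, then hot-add a second namespace so the controller raises a namespace-attribute-changed event. What follows is a minimal bash sketch of that sequence, with paths and arguments taken from the trace above; the polling loop is a paraphrase of the waitforfile trace, not the verbatim autotest_common.sh helper.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
traddr=/var/run/vfio-user/domain/vfio-user2/2
touch_file=/tmp/aer_touch_file

rm -f "$touch_file"

# Arm the AER listener in the background; it touches $touch_file once
# its asynchronous event callbacks are registered.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode2" \
    -n 2 -g -t "$touch_file" &
aerpid=$!

# Poll for the touch file, 0.1 s at a time, up to 200 tries (as in the trace).
i=0
while [ ! -e "$touch_file" ] && [ "$i" -lt 200 ]; do
    i=$((i + 1))
    sleep 0.1
done
rm -f "$touch_file"

# Hot-adding a second namespace while the controller is live triggers the
# log page 4 (changed namespace list) AEN that aer_cb reports above.
"$rpc" bdev_malloc_create 64 512 --name Malloc4
"$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

wait "$aerpid"

The subsystem listing that follows shows the result: Malloc4 attached as nsid 2 of nqn.2019-07.io.spdk:cnode2.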
00:18:04.079 [ 00:18:04.079 { 00:18:04.079 "allow_any_host": true, 00:18:04.079 "hosts": [], 00:18:04.079 "listen_addresses": [], 00:18:04.079 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:04.079 "subtype": "Discovery" 00:18:04.079 }, 00:18:04.079 { 00:18:04.079 "allow_any_host": true, 00:18:04.079 "hosts": [], 00:18:04.079 "listen_addresses": [ 00:18:04.079 { 00:18:04.079 "adrfam": "IPv4", 00:18:04.079 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:04.079 "trsvcid": "0", 00:18:04.079 "trtype": "VFIOUSER" 00:18:04.079 } 00:18:04.079 ], 00:18:04.079 "max_cntlid": 65519, 00:18:04.079 "max_namespaces": 32, 00:18:04.079 "min_cntlid": 1, 00:18:04.079 "model_number": "SPDK bdev Controller", 00:18:04.079 "namespaces": [ 00:18:04.079 { 00:18:04.079 "bdev_name": "Malloc1", 00:18:04.079 "name": "Malloc1", 00:18:04.079 "nguid": "77C0C095BDAF41499ADE1DE2D5626C91", 00:18:04.079 "nsid": 1, 00:18:04.079 "uuid": "77c0c095-bdaf-4149-9ade-1de2d5626c91" 00:18:04.079 }, 00:18:04.079 { 00:18:04.079 "bdev_name": "Malloc3", 00:18:04.079 "name": "Malloc3", 00:18:04.079 "nguid": "05BB187D1A884466A1F88684DEFF5A53", 00:18:04.079 "nsid": 2, 00:18:04.079 "uuid": "05bb187d-1a88-4466-a1f8-8684deff5a53" 00:18:04.079 } 00:18:04.079 ], 00:18:04.079 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:04.079 "serial_number": "SPDK1", 00:18:04.079 "subtype": "NVMe" 00:18:04.079 }, 00:18:04.079 { 00:18:04.079 "allow_any_host": true, 00:18:04.079 "hosts": [], 00:18:04.079 "listen_addresses": [ 00:18:04.079 { 00:18:04.079 "adrfam": "IPv4", 00:18:04.079 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:04.079 "trsvcid": "0", 00:18:04.079 "trtype": "VFIOUSER" 00:18:04.079 } 00:18:04.079 ], 00:18:04.079 "max_cntlid": 65519, 00:18:04.079 "max_namespaces": 32, 00:18:04.079 "min_cntlid": 1, 00:18:04.079 "model_number": "SPDK bdev Controller", 00:18:04.079 "namespaces": [ 00:18:04.079 { 00:18:04.079 "bdev_name": "Malloc2", 00:18:04.079 "name": "Malloc2", 00:18:04.079 "nguid": "C5D629493B944BF088EC0F2A986FDAFA", 00:18:04.079 "nsid": 1, 00:18:04.079 "uuid": "c5d62949-3b94-4bf0-88ec-0f2a986fdafa" 00:18:04.079 }, 00:18:04.079 { 00:18:04.079 "bdev_name": "Malloc4", 00:18:04.079 "name": "Malloc4", 00:18:04.079 "nguid": "387B0F9E82924433B726246FC7165B3D", 00:18:04.079 "nsid": 2, 00:18:04.079 "uuid": "387b0f9e-8292-4433-b726-246fc7165b3d" 00:18:04.079 } 00:18:04.079 ], 00:18:04.079 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:04.079 "serial_number": "SPDK2", 00:18:04.079 "subtype": "NVMe" 00:18:04.079 } 00:18:04.079 ] 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 93442 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 92741 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 92741 ']' 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 92741 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92741 00:18:04.079 killing process with pid 92741 00:18:04.079 17:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92741' 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 92741 00:18:04.079 17:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 92741 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:04.647 Process pid: 93492 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=93492 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 93492' 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 93492 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 93492 ']' 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.647 17:38:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:04.647 [2024-11-20 17:38:46.210484] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:04.647 [2024-11-20 17:38:46.212351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:04.647 [2024-11-20 17:38:46.212608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.647 [2024-11-20 17:38:46.354254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.906 [2024-11-20 17:38:46.502317] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.906 [2024-11-20 17:38:46.502414] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.906 [2024-11-20 17:38:46.502426] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.906 [2024-11-20 17:38:46.502435] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.906 [2024-11-20 17:38:46.502443] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.906 [2024-11-20 17:38:46.502556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.906 [2024-11-20 17:38:46.503386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.906 [2024-11-20 17:38:46.503488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.906 [2024-11-20 17:38:46.503498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.906 [2024-11-20 17:38:46.664508] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:04.906 [2024-11-20 17:38:46.664778] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:04.906 [2024-11-20 17:38:46.665811] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:04.906 [2024-11-20 17:38:46.666003] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:04.906 [2024-11-20 17:38:46.666276] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
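Before the per-device trace that follows, it helps to see the whole interrupt-mode bring-up in one place. Below is a condensed bash sketch of what setup_nvmf_vfio_user performs here, assembled from the traced commands (the target itself was launched as nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode); this is a paraphrase of the helper, not its verbatim body.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport flags (-M -I) passed through exactly as given in the trace.
"$rpc" nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user

# One malloc-backed subsystem per vfio-user device, as in the trace below.
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
    "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done

Note that each listener address is a directory rather than an IP endpoint: the VFIOUSER transport exposes the controller as a vfio-user socket under /var/run/vfio-user/domain/vfio-user$i/$i, the same traddr the initiator-side tools in this log connect to.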
00:18:05.840 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.840 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:05.840 17:38:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:06.773 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:07.030 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:07.030 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:07.030 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:07.030 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:07.030 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.288 Malloc1 00:18:07.547 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:07.804 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:08.063 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:08.719 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.719 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:08.719 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:08.976 Malloc2 00:18:08.976 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:09.543 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:09.801 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 93492 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 93492 ']' 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 93492 00:18:10.059 17:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93492 00:18:10.059 killing process with pid 93492 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93492' 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 93492 00:18:10.059 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 93492 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:10.625 00:18:10.625 real 0m59.009s 00:18:10.625 user 3m44.332s 00:18:10.625 sys 0m4.966s 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:10.625 ************************************ 00:18:10.625 END TEST nvmf_vfio_user 00:18:10.625 ************************************ 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.625 ************************************ 00:18:10.625 START TEST nvmf_vfio_user_nvme_compliance 00:18:10.625 ************************************ 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:10.625 * Looking for test storage... 
00:18:10.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.625 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.626 --rc genhtml_branch_coverage=1 00:18:10.626 --rc genhtml_function_coverage=1 00:18:10.626 --rc genhtml_legend=1 00:18:10.626 --rc geninfo_all_blocks=1 00:18:10.626 --rc geninfo_unexecuted_blocks=1 00:18:10.626 00:18:10.626 ' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.626 --rc genhtml_branch_coverage=1 00:18:10.626 --rc genhtml_function_coverage=1 00:18:10.626 --rc genhtml_legend=1 00:18:10.626 --rc geninfo_all_blocks=1 00:18:10.626 --rc geninfo_unexecuted_blocks=1 00:18:10.626 00:18:10.626 ' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.626 --rc genhtml_branch_coverage=1 00:18:10.626 --rc genhtml_function_coverage=1 00:18:10.626 --rc genhtml_legend=1 00:18:10.626 --rc geninfo_all_blocks=1 00:18:10.626 --rc geninfo_unexecuted_blocks=1 00:18:10.626 00:18:10.626 ' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.626 --rc genhtml_branch_coverage=1 00:18:10.626 --rc genhtml_function_coverage=1 00:18:10.626 --rc genhtml_legend=1 00:18:10.626 --rc geninfo_all_blocks=1 00:18:10.626 --rc 
geninfo_unexecuted_blocks=1 00:18:10.626 00:18:10.626 ' 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.626 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.884 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.884 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.885 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=93702 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:10.885 Process pid: 93702 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 93702' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 93702 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 93702 ']' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.885 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:10.885 [2024-11-20 17:38:52.515062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:10.885 [2024-11-20 17:38:52.515216] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.885 [2024-11-20 17:38:52.653923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:11.144 [2024-11-20 17:38:52.804402] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.144 [2024-11-20 17:38:52.804505] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.144 [2024-11-20 17:38:52.804517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.144 [2024-11-20 17:38:52.804528] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.144 [2024-11-20 17:38:52.804536] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.144 [2024-11-20 17:38:52.804719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.144 [2024-11-20 17:38:52.804780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.144 [2024-11-20 17:38:52.804785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.079 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.079 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:12.079 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 malloc0 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:13.059 17:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.059 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:13.318 00:18:13.318 00:18:13.318 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.318 http://cunit.sourceforge.net/ 00:18:13.318 00:18:13.318 00:18:13.318 Suite: nvme_compliance 00:18:13.318 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 17:38:55.061869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.318 [2024-11-20 17:38:55.063560] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:13.318 [2024-11-20 17:38:55.063591] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:13.319 [2024-11-20 17:38:55.063603] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:13.319 [2024-11-20 17:38:55.066933] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.319 passed 00:18:13.577 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 17:38:55.161089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.577 [2024-11-20 17:38:55.164142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.577 passed 00:18:13.577 Test: admin_identify_ns ...[2024-11-20 17:38:55.260756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.577 [2024-11-20 17:38:55.320172] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:13.577 [2024-11-20 17:38:55.328177] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:13.577 [2024-11-20 17:38:55.349514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:18:13.835 passed 00:18:13.835 Test: admin_get_features_mandatory_features ...[2024-11-20 17:38:55.444332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.835 [2024-11-20 17:38:55.447363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.835 passed 00:18:13.835 Test: admin_get_features_optional_features ...[2024-11-20 17:38:55.542553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.835 [2024-11-20 17:38:55.545593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.835 passed 00:18:14.093 Test: admin_set_features_number_of_queues ...[2024-11-20 17:38:55.636513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.093 [2024-11-20 17:38:55.742531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.093 passed 00:18:14.093 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 17:38:55.832470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.093 [2024-11-20 17:38:55.835527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.093 passed 00:18:14.352 Test: admin_get_log_page_with_lpo ...[2024-11-20 17:38:55.928826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.352 [2024-11-20 17:38:55.998222] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:14.352 [2024-11-20 17:38:56.011514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.352 passed 00:18:14.352 Test: fabric_property_get ...[2024-11-20 17:38:56.103493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.352 [2024-11-20 17:38:56.104986] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:14.352 [2024-11-20 17:38:56.106555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.610 passed 00:18:14.610 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 17:38:56.202717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.610 [2024-11-20 17:38:56.204246] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:14.610 [2024-11-20 17:38:56.205763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.610 passed 00:18:14.610 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 17:38:56.297807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.610 [2024-11-20 17:38:56.380107] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:14.610 [2024-11-20 17:38:56.397125] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:14.868 [2024-11-20 17:38:56.402401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:14.868 passed 00:18:14.868 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 17:38:56.494784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.868 [2024-11-20 17:38:56.496247] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:14.868 [2024-11-20 17:38:56.497817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:14.868 passed 00:18:14.868 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 17:38:56.590367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.127 [2024-11-20 17:38:56.664105] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:15.127 [2024-11-20 17:38:56.688183] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:15.127 [2024-11-20 17:38:56.696610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.127 passed 00:18:15.127 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 17:38:56.785603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.127 [2024-11-20 17:38:56.787050] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:15.127 [2024-11-20 17:38:56.787132] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:15.127 [2024-11-20 17:38:56.788629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.127 passed 00:18:15.127 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 17:38:56.880088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.385 [2024-11-20 17:38:56.972103] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:15.385 [2024-11-20 17:38:56.980105] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:15.385 [2024-11-20 17:38:56.988106] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:15.385 [2024-11-20 17:38:56.996099] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:15.385 [2024-11-20 17:38:57.025458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.385 passed 00:18:15.385 Test: admin_create_io_sq_verify_pc ...[2024-11-20 17:38:57.115444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.385 [2024-11-20 17:38:57.133148] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:15.385 [2024-11-20 17:38:57.151182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.644 passed 00:18:15.644 Test: admin_create_io_qp_max_qps ...[2024-11-20 17:38:57.244161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.578 [2024-11-20 17:38:58.355151] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:17.145 [2024-11-20 17:38:58.742914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.145 passed 00:18:17.145 Test: admin_create_io_sq_shared_cq ...[2024-11-20 17:38:58.835712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.405 [2024-11-20 17:38:58.968113] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:17.405 [2024-11-20 17:38:59.005303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.405 passed 00:18:17.405 00:18:17.405 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.405 suites 1 1 n/a 0 0 00:18:17.405 tests 18 18 18 0 0 00:18:17.405 asserts 360 360 360 0 
n/a 00:18:17.405 00:18:17.405 Elapsed time = 1.645 seconds 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 93702 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 93702 ']' 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 93702 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93702 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.405 killing process with pid 93702 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93702' 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 93702 00:18:17.405 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 93702 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:17.971 00:18:17.971 real 0m7.462s 00:18:17.971 user 0m20.410s 00:18:17.971 sys 0m0.736s 00:18:17.971 ************************************ 00:18:17.971 END TEST nvmf_vfio_user_nvme_compliance 00:18:17.971 ************************************ 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.971 ************************************ 00:18:17.971 START TEST nvmf_vfio_user_fuzz 00:18:17.971 ************************************ 00:18:17.971 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:18.229 * Looking for test storage... 
00:18:18.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.229 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.230 --rc genhtml_branch_coverage=1 00:18:18.230 --rc genhtml_function_coverage=1 00:18:18.230 --rc genhtml_legend=1 00:18:18.230 --rc geninfo_all_blocks=1 00:18:18.230 --rc geninfo_unexecuted_blocks=1 00:18:18.230 00:18:18.230 ' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.230 --rc genhtml_branch_coverage=1 00:18:18.230 --rc genhtml_function_coverage=1 00:18:18.230 --rc genhtml_legend=1 00:18:18.230 --rc geninfo_all_blocks=1 00:18:18.230 --rc geninfo_unexecuted_blocks=1 00:18:18.230 00:18:18.230 ' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.230 --rc genhtml_branch_coverage=1 00:18:18.230 --rc genhtml_function_coverage=1 00:18:18.230 --rc genhtml_legend=1 00:18:18.230 --rc geninfo_all_blocks=1 00:18:18.230 --rc geninfo_unexecuted_blocks=1 00:18:18.230 00:18:18.230 ' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.230 --rc genhtml_branch_coverage=1 00:18:18.230 --rc genhtml_function_coverage=1 00:18:18.230 --rc genhtml_legend=1 00:18:18.230 --rc geninfo_all_blocks=1 00:18:18.230 --rc geninfo_unexecuted_blocks=1 00:18:18.230 00:18:18.230 ' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
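The scripts/common.sh xtrace above implements the suite's shell-native version comparison: cmp_versions splits each version string on '.', '-' and ':' and compares the fields numerically, which is why lt 1.15 2 succeeds and the pre-2.0 lcov option names are selected. A minimal standalone sketch of that logic (the helper name lt is kept from the trace; the body is a simplification, not the exact scripts/common.sh source):

lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # strictly smaller field: less-than
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1   # strictly larger field: not less-than
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 style options"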
00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.230 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:18.231 17:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=93867 00:18:18.231 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 93867' 00:18:18.231 Process pid: 93867 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 93867 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 93867 ']' 00:18:18.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
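Stripped of the xtrace noise, the lifecycle traced above is: launch nvmf_tgt in the background, record nvmfpid, install a cleanup trap, then block in waitforlisten until the RPC socket answers before issuing the first rpc_cmd. A minimal sketch of that pattern; polling rpc_get_methods over /var/tmp/spdk.sock is an assumption about waitforlisten's internals (the real helper also caps retries at max_retries=100):

/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
# wait until the target is up and serving RPCs on the default socket
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done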
00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.231 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:19.606 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.606 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:19.606 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.626 malloc0 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
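Collapsing the rpc_cmd traces from vfio_user_fuzz.sh@32 through @41, the fuzz target is assembled with five RPCs and then handed to nvme_fuzz as a vfio-user transport ID. The same sequence as a plain script, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper (paths relative to the spdk repo root, target from the step above already listening):

rpc="scripts/rpc.py"
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0             # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# 30 s of seeded random admin/IO commands against the vfio-user endpoint
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a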
00:18:20.626 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:21.193 Shutting down the fuzz application 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 93867 00:18:21.193 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 93867 ']' 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 93867 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93867 00:18:21.194 killing process with pid 93867 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93867' 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 93867 00:18:21.194 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 93867 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:21.759 00:18:21.759 real 0m3.627s 00:18:21.759 user 0m3.840s 00:18:21.759 sys 0m0.773s 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.759 ************************************ 00:18:21.759 END TEST nvmf_vfio_user_fuzz 00:18:21.759 ************************************ 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.759 17:39:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.759 ************************************ 00:18:21.759 START TEST nvmf_auth_target 00:18:21.759 ************************************ 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:21.759 * Looking for test storage... 00:18:21.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:21.759 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.018 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.019 --rc genhtml_branch_coverage=1 00:18:22.019 --rc genhtml_function_coverage=1 00:18:22.019 --rc genhtml_legend=1 00:18:22.019 --rc geninfo_all_blocks=1 00:18:22.019 --rc geninfo_unexecuted_blocks=1 00:18:22.019 00:18:22.019 ' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.019 --rc genhtml_branch_coverage=1 00:18:22.019 --rc genhtml_function_coverage=1 00:18:22.019 --rc genhtml_legend=1 00:18:22.019 --rc geninfo_all_blocks=1 00:18:22.019 --rc geninfo_unexecuted_blocks=1 00:18:22.019 00:18:22.019 ' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.019 --rc genhtml_branch_coverage=1 00:18:22.019 --rc genhtml_function_coverage=1 00:18:22.019 --rc genhtml_legend=1 00:18:22.019 --rc geninfo_all_blocks=1 00:18:22.019 --rc geninfo_unexecuted_blocks=1 00:18:22.019 00:18:22.019 ' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.019 --rc genhtml_branch_coverage=1 00:18:22.019 --rc genhtml_function_coverage=1 00:18:22.019 --rc genhtml_legend=1 00:18:22.019 --rc geninfo_all_blocks=1 00:18:22.019 --rc geninfo_unexecuted_blocks=1 00:18:22.019 00:18:22.019 ' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.019 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:22.019 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.020 
17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:22.020 Cannot find device "nvmf_init_br" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:22.020 Cannot find device "nvmf_init_br2" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:22.020 Cannot find device "nvmf_tgt_br" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.020 Cannot find device "nvmf_tgt_br2" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:22.020 Cannot find device "nvmf_init_br" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:22.020 Cannot find device "nvmf_init_br2" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:22.020 Cannot find device "nvmf_tgt_br" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:22.020 Cannot find device "nvmf_tgt_br2" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:22.020 Cannot find device "nvmf_br" 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:22.020 Cannot find device "nvmf_init_if" 00:18:22.020 17:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true
00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:18:22.020 Cannot find device "nvmf_init_if2"
00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true
00:18:22.020 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:22.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:22.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:18:22.279 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:22.279 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:18:22.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:22.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.182 ms
00:18:22.280
00:18:22.280 --- 10.0.0.3 ping statistics ---
00:18:22.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:22.280 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:18:22.280 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:18:22.280 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms
00:18:22.280
00:18:22.280 --- 10.0.0.4 ping statistics ---
00:18:22.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:22.280 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:18:22.280 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:22.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:22.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms
00:18:22.538
00:18:22.538 --- 10.0.0.1 ping statistics ---
00:18:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:22.538 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:18:22.538 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:18:22.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:22.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms
00:18:22.538
00:18:22.538 --- 10.0.0.2 ping statistics ---
00:18:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:22.538 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:18:22.538 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:22.538 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=94129
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 94129
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94129 ']'
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
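For reference, the ip commands above assemble one flat L2 test segment: the initiator keeps nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) in the root namespace, the target's nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) live inside nvmf_tgt_ns_spdk, and all four *_br veth peers are enslaved to the nvmf_br bridge. A minimal sketch of the same layout, reduced to one veth pair per side (names and addresses come from the log; the *_if2/*_br2 twins and error handling are omitted):

#!/usr/bin/env bash
# Sketch of the test topology reconstructed from the log above.
set -e

ip netns add nvmf_tgt_ns_spdk

# Each side is a veth pair: the *_if end carries the address, the *_br end
# stays in the default namespace and is enslaved to a common bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Admit NVMe/TCP (port 4420); the SPDK_NVMF comment tags the rule so the
# teardown path can find and remove exactly these entries later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.3   # initiator -> target, as the log verifies above

Because every endpoint sits in the same 10.0.0.0/24 bridge domain, the four pings above (root namespace to .3/.4, target namespace back to .1/.2) are enough to prove the segment before the nvmf target is started.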
00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.539 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.473 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=94173 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=55d4b775bbbe598475491c5b235d6d886de8b2e92042012a 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.dEk 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 55d4b775bbbe598475491c5b235d6d886de8b2e92042012a 0 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 55d4b775bbbe598475491c5b235d6d886de8b2e92042012a 0 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=55d4b775bbbe598475491c5b235d6d886de8b2e92042012a 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:18:23.474 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.733 17:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.dEk 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.dEk 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.dEk 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7c14c4b3d5b6befff9a9a0188c01854d51214ddf327d67088149d54d0b55b48a 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.hdf 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7c14c4b3d5b6befff9a9a0188c01854d51214ddf327d67088149d54d0b55b48a 3 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7c14c4b3d5b6befff9a9a0188c01854d51214ddf327d67088149d54d0b55b48a 3 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7c14c4b3d5b6befff9a9a0188c01854d51214ddf327d67088149d54d0b55b48a 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.hdf 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.hdf 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hdf 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:23.733 17:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=055cb484941ea72e87d5924f42770bd9 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.BLt 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 055cb484941ea72e87d5924f42770bd9 1 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 055cb484941ea72e87d5924f42770bd9 1 00:18:23.733 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=055cb484941ea72e87d5924f42770bd9 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.BLt 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.BLt 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BLt 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=27fec450c5afd2717e8651ed6abc09431c3f7761651a005a 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.A4z 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 27fec450c5afd2717e8651ed6abc09431c3f7761651a005a 2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 27fec450c5afd2717e8651ed6abc09431c3f7761651a005a 2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=27fec450c5afd2717e8651ed6abc09431c3f7761651a005a 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.A4z 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.A4z 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.A4z 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=30c6e28babed61a792f675d208071e3e26c12c993c0bd866 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.nDd 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 30c6e28babed61a792f675d208071e3e26c12c993c0bd866 2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 30c6e28babed61a792f675d208071e3e26c12c993c0bd866 2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=30c6e28babed61a792f675d208071e3e26c12c993c0bd866 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:23.734 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.nDd 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.nDd 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nDd 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.993 17:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a040dfc45b1c7a3690a4a66b23e5e922 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.qVj 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a040dfc45b1c7a3690a4a66b23e5e922 1 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a040dfc45b1c7a3690a4a66b23e5e922 1 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a040dfc45b1c7a3690a4a66b23e5e922 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.qVj 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.qVj 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qVj 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=41cfda778264b77fc184ad5526a87d5fdd445099b6c1d2f6edc8ebb6aeb67060 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.EYM 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
41cfda778264b77fc184ad5526a87d5fdd445099b6c1d2f6edc8ebb6aeb67060 3 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 41cfda778264b77fc184ad5526a87d5fdd445099b6c1d2f6edc8ebb6aeb67060 3 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=41cfda778264b77fc184ad5526a87d5fdd445099b6c1d2f6edc8ebb6aeb67060 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.EYM 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.EYM 00:18:23.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.EYM 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 94129 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94129 ']' 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.993 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 94173 /var/tmp/host.sock 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 94173 ']' 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.251 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:24.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
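Each gen_dhchap_key call above draws len/2 random bytes with xxd -p and keeps the resulting ASCII hex string itself as the key material; the inline "python -" step then wraps it in the DHHC-1 envelope. The following is a minimal sketch of that wrapping, under the assumption of the conventional DHHC-1 layout (base64 over the key bytes followed by their CRC32 in little-endian order, with the digest id from the digests map above — null=0, sha256=1, sha384=2, sha512=3 — rendered as two hex digits); gen_dhchap_key_sketch is a hypothetical stand-in for the gen_dhchap_key/format_dhchap_key pair being traced:

gen_dhchap_key_sketch() {
  local digest=$1 len=$2 key
  # len hex characters from len/2 random bytes, as in the xxd calls above
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
# assumed layout: DHHC-1:<digest id, 2 hex digits>:<base64(key || crc32(key) LE)>:
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

gen_dhchap_key_sketch 0 48   # null digest, 48-char key, like keys[0] above
gen_dhchap_key_sketch 3 64   # sha512 digest, 64-char key, like ckeys[0] above

This layout appears consistent with the secrets used further down: base64-decoding the DHHC-1:00: value passed to nvme connect later yields the 48-character hex string generated here plus four trailing checksum bytes.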
00:18:24.252 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.252 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dEk 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dEk 00:18:24.818 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dEk 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.hdf ]] 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdf 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdf 00:18:25.076 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdf 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BLt 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BLt 00:18:25.335 17:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BLt 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.A4z ]] 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A4z 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A4z 00:18:25.593 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A4z 00:18:25.851 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:25.851 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nDd 00:18:25.851 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.851 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.110 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.110 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nDd 00:18:26.110 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nDd 00:18:26.368 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qVj ]] 00:18:26.368 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qVj 00:18:26.368 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.368 17:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.368 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.368 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qVj 00:18:26.368 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qVj 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EYM 00:18:26.626 17:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EYM 00:18:26.626 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EYM 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.885 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.452 17:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.711
00:18:27.711 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:27.711 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:27.711 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:27.969 {
00:18:27.969 "auth": {
00:18:27.969 "dhgroup": "null",
00:18:27.969 "digest": "sha256",
00:18:27.969 "state": "completed"
00:18:27.969 },
00:18:27.969 "cntlid": 1,
00:18:27.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e",
00:18:27.969 "listen_address": {
00:18:27.969 "adrfam": "IPv4",
00:18:27.969 "traddr": "10.0.0.3",
00:18:27.969 "trsvcid": "4420",
00:18:27.969 "trtype": "TCP"
00:18:27.969 },
00:18:27.969 "peer_address": {
00:18:27.969 "adrfam": "IPv4",
00:18:27.969 "traddr": "10.0.0.1",
00:18:27.969 "trsvcid": "48192",
00:18:27.969 "trtype": "TCP"
00:18:27.969 },
00:18:27.969 "qid": 0,
00:18:27.969 "state": "enabled",
00:18:27.969 "thread": "nvmf_tgt_poll_group_000"
00:18:27.969 }
00:18:27.969 ]'
17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:27.969 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:28.228 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:28.228 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:28.228 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:28.228 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:28.228 17:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:28.487 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=:
00:18:28.487 17:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.761 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.761 17:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:33.761
00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:33.761 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:34.327 {
00:18:34.327 "auth": {
00:18:34.327 "dhgroup": "null",
00:18:34.327 "digest": "sha256",
00:18:34.327 "state": "completed"
00:18:34.327 },
00:18:34.327 "cntlid": 3,
00:18:34.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e",
00:18:34.327 "listen_address": {
00:18:34.327 "adrfam": "IPv4",
00:18:34.327 "traddr": "10.0.0.3",
00:18:34.327 "trsvcid": "4420",
00:18:34.327 "trtype": "TCP"
00:18:34.327 },
00:18:34.327 "peer_address": {
00:18:34.327 "adrfam": "IPv4",
00:18:34.327 "traddr": "10.0.0.1",
00:18:34.327 "trsvcid": "55790",
00:18:34.327 "trtype": "TCP"
00:18:34.327 },
00:18:34.327 "qid": 0,
00:18:34.327 "state": "enabled",
00:18:34.327 "thread": "nvmf_tgt_poll_group_000"
00:18:34.327 }
00:18:34.327 ]'
17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:34.327 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:34.327 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:34.327 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.327 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.327 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:34.895 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret
DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:34.895 17:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:35.462 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.463 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.031 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.289
00:18:36.289 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:36.289 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:36.289 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:36.618 {
00:18:36.618 "auth": {
00:18:36.618 "dhgroup": "null",
00:18:36.618 "digest": "sha256",
00:18:36.618 "state": "completed"
00:18:36.618 },
00:18:36.618 "cntlid": 5,
00:18:36.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e",
00:18:36.618 "listen_address": {
00:18:36.618 "adrfam": "IPv4",
00:18:36.618 "traddr": "10.0.0.3",
00:18:36.618 "trsvcid": "4420",
00:18:36.618 "trtype": "TCP"
00:18:36.618 },
00:18:36.618 "peer_address": {
00:18:36.618 "adrfam": "IPv4",
00:18:36.618 "traddr": "10.0.0.1",
00:18:36.618 "trsvcid": "55822",
00:18:36.618 "trtype": "TCP"
00:18:36.618 },
00:18:36.618 "qid": 0,
00:18:36.618 "state": "enabled",
00:18:36.618 "thread": "nvmf_tgt_poll_group_000"
00:18:36.618 }
00:18:36.618 ]'
17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:36.618 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:36.877 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:36.877 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:36.877 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:36.877 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:36.877 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.135 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:37.135 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.700 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.267 17:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.525
00:18:38.525 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:38.525 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.525 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:39.093 {
00:18:39.093 "auth": {
00:18:39.093 "dhgroup": "null",
00:18:39.093 "digest": "sha256",
00:18:39.093 "state": "completed"
00:18:39.093 },
00:18:39.093 "cntlid": 7,
00:18:39.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e",
00:18:39.093 "listen_address": {
00:18:39.093 "adrfam": "IPv4",
00:18:39.093 "traddr": "10.0.0.3",
00:18:39.093 "trsvcid": "4420",
00:18:39.093 "trtype": "TCP"
00:18:39.093 },
00:18:39.093 "peer_address": {
00:18:39.093 "adrfam": "IPv4",
00:18:39.093 "traddr": "10.0.0.1",
00:18:39.093 "trsvcid": "53180",
00:18:39.093 "trtype": "TCP"
00:18:39.093 },
00:18:39.093 "qid": 0,
00:18:39.093 "state": "enabled",
00:18:39.093 "thread": "nvmf_tgt_poll_group_000"
00:18:39.093 }
00:18:39.093 ]'
17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.093 17:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.350 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:39.350 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.915 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.480 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.481 17:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.737 00:18:40.737 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.737 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.737 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.996 { 00:18:40.996 "auth": { 00:18:40.996 "dhgroup": "ffdhe2048", 00:18:40.996 "digest": "sha256", 00:18:40.996 "state": "completed" 00:18:40.996 }, 00:18:40.996 "cntlid": 9, 00:18:40.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:40.996 "listen_address": { 00:18:40.996 "adrfam": "IPv4", 00:18:40.996 "traddr": "10.0.0.3", 00:18:40.996 "trsvcid": "4420", 00:18:40.996 "trtype": "TCP" 00:18:40.996 }, 00:18:40.996 "peer_address": { 00:18:40.996 "adrfam": "IPv4", 00:18:40.996 "traddr": "10.0.0.1", 00:18:40.996 "trsvcid": "53202", 00:18:40.996 "trtype": "TCP" 00:18:40.996 }, 00:18:40.996 "qid": 0, 00:18:40.996 "state": "enabled", 00:18:40.996 "thread": "nvmf_tgt_poll_group_000" 00:18:40.996 } 00:18:40.996 ]' 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.996 17:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.560 
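# Note on the DHHC-1 strings exchanged above: they are NVMe DH-HMAC-CHAP
# secrets in the representation DHHC-1:<tt>:<base64(secret + CRC-32)>:, where
# tt=00 means the secret is used as configured and tt=01/02/03 indicate an
# HMAC-SHA-256/384/512 transform, which is why this log carries four
# differently shaped keys. A minimal sketch of generating one with nvme-cli;
# the exact gen-dhchap-key flags are an assumption, confirm against
# `nvme gen-dhchap-key --help` on the installed version:
nvme gen-dhchap-key --key-length=32 --hmac=1 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e
# prints a DHHC-1:01:...: string usable as a --dhchap-secret value below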
17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:18:41.560 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.126 17:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.690 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.948 00:18:42.948 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.948 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.948 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.207 { 00:18:43.207 "auth": { 00:18:43.207 "dhgroup": "ffdhe2048", 00:18:43.207 "digest": "sha256", 00:18:43.207 "state": "completed" 00:18:43.207 }, 00:18:43.207 "cntlid": 11, 00:18:43.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:43.207 "listen_address": { 00:18:43.207 "adrfam": "IPv4", 00:18:43.207 "traddr": "10.0.0.3", 00:18:43.207 "trsvcid": "4420", 00:18:43.207 "trtype": "TCP" 00:18:43.207 }, 00:18:43.207 "peer_address": { 00:18:43.207 "adrfam": "IPv4", 00:18:43.207 "traddr": "10.0.0.1", 00:18:43.207 "trsvcid": "53218", 00:18:43.207 "trtype": "TCP" 00:18:43.207 }, 00:18:43.207 "qid": 0, 00:18:43.207 "state": "enabled", 00:18:43.207 "thread": "nvmf_tgt_poll_group_000" 00:18:43.207 } 00:18:43.207 ]' 00:18:43.207 17:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.465 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.465 
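# Target-side half of each cycle above, distilled. The nvmf_subsystem_add_host
# call is taken verbatim from the trace; the names key1/ckey1 must already
# exist in the target's keyring, and the keyring_file_add_key lines are an
# assumed setup step (SPDK's file-backed keyring RPC, with hypothetical
# key-file paths) that this excerpt does not show. rpc.py abbreviates
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py:
rpc.py keyring_file_add_key key1 /tmp/key1.dhchap    # host secret (assumed step)
rpc.py keyring_file_add_key ckey1 /tmp/ckey1.dhchap  # controller secret (assumed step)
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1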
17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.031 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:44.031 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.597 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.163 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.164 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.422 00:18:45.422 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.422 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.422 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.680 { 00:18:45.680 "auth": { 00:18:45.680 "dhgroup": "ffdhe2048", 00:18:45.680 "digest": "sha256", 00:18:45.680 "state": "completed" 00:18:45.680 }, 00:18:45.680 "cntlid": 13, 00:18:45.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:45.680 "listen_address": { 00:18:45.680 "adrfam": "IPv4", 00:18:45.680 "traddr": "10.0.0.3", 00:18:45.680 "trsvcid": "4420", 00:18:45.680 "trtype": "TCP" 00:18:45.680 }, 00:18:45.680 "peer_address": { 00:18:45.680 "adrfam": "IPv4", 00:18:45.680 "traddr": "10.0.0.1", 00:18:45.680 "trsvcid": "53230", 00:18:45.680 "trtype": "TCP" 00:18:45.680 }, 00:18:45.680 "qid": 0, 00:18:45.680 "state": "enabled", 00:18:45.680 "thread": "nvmf_tgt_poll_group_000" 00:18:45.680 } 00:18:45.680 ]' 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.680 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.981 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.981 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.981 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.981 17:39:27 
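# Host-side half, same commands the trace issues against the host app's RPC
# socket: first restrict the initiator to a single digest/dhgroup pair, then
# attach with the key names under test (key2/ckey2 in this iteration):
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2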
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.981 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.240 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:46.240 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.808 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
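# Why the key3 nvmf_subsystem_add_host above carries no --dhchap-ctrlr-key:
# the script builds that flag conditionally, so a key id with no configured
# controller secret falls back to one-way (host-only) authentication. The
# idiom in isolation, with illustrative values:
ckeys=([0]="DHHC-1:03:...controller-secret...:" [3]="")  # no ckey3 configured
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty array for key3
echo "${ckey[@]:-bidirectional auth disabled for this key id}"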
00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.375 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.632 00:18:47.888 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.888 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.888 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.148 { 00:18:48.148 "auth": { 00:18:48.148 "dhgroup": "ffdhe2048", 00:18:48.148 "digest": "sha256", 00:18:48.148 "state": "completed" 00:18:48.148 }, 00:18:48.148 "cntlid": 15, 00:18:48.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:48.148 "listen_address": { 00:18:48.148 "adrfam": "IPv4", 00:18:48.148 "traddr": "10.0.0.3", 00:18:48.148 "trsvcid": "4420", 00:18:48.148 "trtype": "TCP" 00:18:48.148 }, 00:18:48.148 "peer_address": { 00:18:48.148 "adrfam": "IPv4", 00:18:48.148 "traddr": "10.0.0.1", 00:18:48.148 "trsvcid": "53252", 00:18:48.148 "trtype": "TCP" 00:18:48.148 }, 00:18:48.148 "qid": 0, 00:18:48.148 "state": "enabled", 00:18:48.148 "thread": "nvmf_tgt_poll_group_000" 00:18:48.148 } 00:18:48.148 ]' 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.148 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.406 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.406 
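# Kernel-initiator leg of the cycle, which the trace runs next: the same
# secret is handed to nvme-cli literally rather than as a keyring name
# (string truncated here; key3 is one-way, so no --dhchap-ctrl-secret):
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e \
    --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 \
    --dhchap-secret 'DHHC-1:03:NDFjZmRh...CXE=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0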
17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.406 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.663 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:48.664 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.231 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.798 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.056 00:18:50.056 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.056 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.056 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.625 { 00:18:50.625 "auth": { 00:18:50.625 "dhgroup": "ffdhe3072", 00:18:50.625 "digest": "sha256", 00:18:50.625 "state": "completed" 00:18:50.625 }, 00:18:50.625 "cntlid": 17, 00:18:50.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:50.625 "listen_address": { 00:18:50.625 "adrfam": "IPv4", 00:18:50.625 "traddr": "10.0.0.3", 00:18:50.625 "trsvcid": "4420", 00:18:50.625 "trtype": "TCP" 00:18:50.625 }, 00:18:50.625 "peer_address": { 00:18:50.625 "adrfam": "IPv4", 00:18:50.625 "traddr": "10.0.0.1", 00:18:50.625 "trsvcid": "35592", 00:18:50.625 "trtype": "TCP" 00:18:50.625 }, 00:18:50.625 "qid": 0, 00:18:50.625 "state": "enabled", 00:18:50.625 "thread": "nvmf_tgt_poll_group_000" 00:18:50.625 } 00:18:50.625 ]' 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.625 17:39:32 
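# The pass/fail check applied in every cycle, distilled: query the target for
# the subsystem's qpairs and assert the negotiated auth parameters, exactly as
# the jq probes in the trace do:
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe3072 on this iteration
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed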
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.625 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.200 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:18:51.200 17:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:51.765 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.332 17:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.590 00:18:52.590 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.590 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.590 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.847 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.847 { 00:18:52.847 "auth": { 00:18:52.847 "dhgroup": "ffdhe3072", 00:18:52.847 "digest": "sha256", 00:18:52.847 "state": "completed" 00:18:52.847 }, 00:18:52.847 "cntlid": 19, 00:18:52.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:52.847 "listen_address": { 00:18:52.847 "adrfam": "IPv4", 00:18:52.847 "traddr": "10.0.0.3", 00:18:52.848 "trsvcid": "4420", 00:18:52.848 "trtype": "TCP" 00:18:52.848 }, 00:18:52.848 "peer_address": { 00:18:52.848 "adrfam": "IPv4", 00:18:52.848 "traddr": "10.0.0.1", 00:18:52.848 "trsvcid": "35624", 00:18:52.848 "trtype": "TCP" 00:18:52.848 }, 00:18:52.848 "qid": 0, 00:18:52.848 "state": "enabled", 00:18:52.848 "thread": "nvmf_tgt_poll_group_000" 00:18:52.848 } 00:18:52.848 ]' 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.106 17:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.671 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:53.671 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.238 17:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.496 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.062 00:18:55.062 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.062 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.062 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.320 17:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.320 { 00:18:55.320 "auth": { 00:18:55.320 "dhgroup": "ffdhe3072", 00:18:55.320 "digest": "sha256", 00:18:55.320 "state": "completed" 00:18:55.320 }, 00:18:55.320 "cntlid": 21, 00:18:55.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:55.320 "listen_address": { 00:18:55.320 "adrfam": "IPv4", 00:18:55.320 "traddr": "10.0.0.3", 00:18:55.320 "trsvcid": "4420", 00:18:55.320 "trtype": "TCP" 00:18:55.320 }, 00:18:55.320 "peer_address": { 00:18:55.320 "adrfam": "IPv4", 00:18:55.320 "traddr": "10.0.0.1", 00:18:55.320 "trsvcid": "35636", 00:18:55.320 "trtype": "TCP" 00:18:55.320 }, 00:18:55.320 "qid": 0, 00:18:55.320 "state": "enabled", 00:18:55.320 "thread": "nvmf_tgt_poll_group_000" 00:18:55.320 } 00:18:55.320 ]' 00:18:55.320 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.320 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.320 17:39:37 
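# Shape of the whole pass, for orientation: everything above is one point in a
# digest x dhgroup x key-id sweep driven by the loops visible at
# target/auth.sh@119-123. Condensed; the dhgroup list beyond ffdhe4096 and any
# sha384/sha512 digest passes are presumed from the pattern, not shown in this
# excerpt:
for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
            --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done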
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.320 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.578 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.578 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.578 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.578 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.837 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:55.837 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.403 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.970 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.227 00:18:57.227 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.227 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.227 17:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.485 { 00:18:57.485 "auth": { 00:18:57.485 "dhgroup": "ffdhe3072", 00:18:57.485 "digest": "sha256", 00:18:57.485 "state": "completed" 00:18:57.485 }, 00:18:57.485 "cntlid": 23, 00:18:57.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:18:57.485 "listen_address": { 00:18:57.485 "adrfam": "IPv4", 00:18:57.485 "traddr": "10.0.0.3", 00:18:57.485 "trsvcid": "4420", 00:18:57.485 "trtype": "TCP" 00:18:57.485 }, 00:18:57.485 "peer_address": { 00:18:57.485 "adrfam": "IPv4", 00:18:57.485 "traddr": "10.0.0.1", 00:18:57.485 "trsvcid": "35656", 00:18:57.485 "trtype": "TCP" 00:18:57.485 }, 00:18:57.485 "qid": 0, 00:18:57.485 "state": "enabled", 00:18:57.485 "thread": "nvmf_tgt_poll_group_000" 00:18:57.485 } 00:18:57.485 ]' 00:18:57.485 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.742 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.000 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:58.000 17:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.935 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.193 17:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.451 00:18:59.451 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.451 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.451 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.048 { 00:19:00.048 "auth": { 00:19:00.048 "dhgroup": "ffdhe4096", 00:19:00.048 "digest": "sha256", 00:19:00.048 "state": "completed" 00:19:00.048 }, 00:19:00.048 "cntlid": 25, 00:19:00.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:00.048 "listen_address": { 00:19:00.048 "adrfam": "IPv4", 00:19:00.048 "traddr": "10.0.0.3", 00:19:00.048 "trsvcid": "4420", 00:19:00.048 "trtype": "TCP" 00:19:00.048 }, 00:19:00.048 "peer_address": { 00:19:00.048 "adrfam": "IPv4", 00:19:00.048 "traddr": "10.0.0.1", 00:19:00.048 "trsvcid": "38856", 00:19:00.048 "trtype": "TCP" 00:19:00.048 }, 00:19:00.048 "qid": 0, 00:19:00.048 "state": "enabled", 00:19:00.048 "thread": "nvmf_tgt_poll_group_000" 00:19:00.048 } 00:19:00.048 ]' 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.048 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.307 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:00.307 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.873 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.439 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.439 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.439 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.439 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.439 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.697 00:19:01.697 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.697 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.697 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.263 { 00:19:02.263 "auth": { 00:19:02.263 "dhgroup": "ffdhe4096", 00:19:02.263 "digest": "sha256", 00:19:02.263 "state": "completed" 00:19:02.263 }, 00:19:02.263 "cntlid": 27, 00:19:02.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:02.263 "listen_address": { 00:19:02.263 "adrfam": "IPv4", 00:19:02.263 "traddr": "10.0.0.3", 00:19:02.263 "trsvcid": "4420", 00:19:02.263 "trtype": "TCP" 00:19:02.263 }, 00:19:02.263 "peer_address": { 00:19:02.263 "adrfam": "IPv4", 00:19:02.263 "traddr": "10.0.0.1", 00:19:02.263 "trsvcid": "38876", 00:19:02.263 "trtype": "TCP" 00:19:02.263 }, 00:19:02.263 "qid": 0, 
00:19:02.263 "state": "enabled", 00:19:02.263 "thread": "nvmf_tgt_poll_group_000" 00:19:02.263 } 00:19:02.263 ]' 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.263 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.829 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:02.829 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.396 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.655 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.316 00:19:04.316 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.316 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.316 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.576 { 00:19:04.576 "auth": { 00:19:04.576 "dhgroup": "ffdhe4096", 00:19:04.576 "digest": "sha256", 00:19:04.576 "state": "completed" 00:19:04.576 }, 00:19:04.576 "cntlid": 29, 00:19:04.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:04.576 "listen_address": { 00:19:04.576 "adrfam": "IPv4", 00:19:04.576 "traddr": "10.0.0.3", 00:19:04.576 "trsvcid": "4420", 00:19:04.576 "trtype": "TCP" 00:19:04.576 }, 00:19:04.576 "peer_address": { 00:19:04.576 "adrfam": "IPv4", 00:19:04.576 "traddr": "10.0.0.1", 
00:19:04.576 "trsvcid": "38904", 00:19:04.576 "trtype": "TCP" 00:19:04.576 }, 00:19:04.576 "qid": 0, 00:19:04.576 "state": "enabled", 00:19:04.576 "thread": "nvmf_tgt_poll_group_000" 00:19:04.576 } 00:19:04.576 ]' 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.576 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.834 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:04.834 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.767 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.025 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.590 00:19:06.590 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.590 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.590 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.847 { 00:19:06.847 "auth": { 00:19:06.847 "dhgroup": "ffdhe4096", 00:19:06.847 "digest": "sha256", 00:19:06.847 "state": "completed" 00:19:06.847 }, 00:19:06.847 "cntlid": 31, 00:19:06.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:06.847 "listen_address": { 00:19:06.847 "adrfam": "IPv4", 00:19:06.847 "traddr": "10.0.0.3", 00:19:06.847 "trsvcid": "4420", 00:19:06.847 "trtype": "TCP" 00:19:06.847 }, 00:19:06.847 "peer_address": { 00:19:06.847 "adrfam": "IPv4", 00:19:06.847 "traddr": 
"10.0.0.1", 00:19:06.847 "trsvcid": "38932", 00:19:06.847 "trtype": "TCP" 00:19:06.847 }, 00:19:06.847 "qid": 0, 00:19:06.847 "state": "enabled", 00:19:06.847 "thread": "nvmf_tgt_poll_group_000" 00:19:06.847 } 00:19:06.847 ]' 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.847 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.412 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:07.412 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.976 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.542 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.800 00:19:08.800 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.800 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.800 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.114 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.114 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.114 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.114 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.386 { 00:19:09.386 "auth": { 00:19:09.386 "dhgroup": "ffdhe6144", 00:19:09.386 "digest": "sha256", 00:19:09.386 "state": "completed" 00:19:09.386 }, 00:19:09.386 "cntlid": 33, 00:19:09.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:09.386 "listen_address": { 00:19:09.386 "adrfam": "IPv4", 00:19:09.386 "traddr": "10.0.0.3", 00:19:09.386 "trsvcid": "4420", 00:19:09.386 
"trtype": "TCP" 00:19:09.386 }, 00:19:09.386 "peer_address": { 00:19:09.386 "adrfam": "IPv4", 00:19:09.386 "traddr": "10.0.0.1", 00:19:09.386 "trsvcid": "54072", 00:19:09.386 "trtype": "TCP" 00:19:09.386 }, 00:19:09.386 "qid": 0, 00:19:09.386 "state": "enabled", 00:19:09.386 "thread": "nvmf_tgt_poll_group_000" 00:19:09.386 } 00:19:09.386 ]' 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.386 17:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.386 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.386 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.386 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.643 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:09.643 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:10.209 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.209 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.210 17:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.775 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.344 00:19:11.344 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.344 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.344 17:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.603 { 00:19:11.603 "auth": { 00:19:11.603 "dhgroup": "ffdhe6144", 00:19:11.603 "digest": "sha256", 00:19:11.603 "state": "completed" 00:19:11.603 }, 00:19:11.603 "cntlid": 35, 00:19:11.603 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:11.603 "listen_address": { 00:19:11.603 "adrfam": "IPv4", 00:19:11.603 "traddr": "10.0.0.3", 00:19:11.603 "trsvcid": "4420", 00:19:11.603 "trtype": "TCP" 00:19:11.603 }, 00:19:11.603 "peer_address": { 00:19:11.603 "adrfam": "IPv4", 00:19:11.603 "traddr": "10.0.0.1", 00:19:11.603 "trsvcid": "54096", 00:19:11.603 "trtype": "TCP" 00:19:11.603 }, 00:19:11.603 "qid": 0, 00:19:11.603 "state": "enabled", 00:19:11.603 "thread": "nvmf_tgt_poll_group_000" 00:19:11.603 } 00:19:11.603 ]' 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.603 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.861 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.861 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.861 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.119 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:12.119 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.685 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.943 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.540 00:19:13.540 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.541 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.541 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.799 { 00:19:13.799 "auth": { 00:19:13.799 "dhgroup": "ffdhe6144", 
00:19:13.799 "digest": "sha256", 00:19:13.799 "state": "completed" 00:19:13.799 }, 00:19:13.799 "cntlid": 37, 00:19:13.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:13.799 "listen_address": { 00:19:13.799 "adrfam": "IPv4", 00:19:13.799 "traddr": "10.0.0.3", 00:19:13.799 "trsvcid": "4420", 00:19:13.799 "trtype": "TCP" 00:19:13.799 }, 00:19:13.799 "peer_address": { 00:19:13.799 "adrfam": "IPv4", 00:19:13.799 "traddr": "10.0.0.1", 00:19:13.799 "trsvcid": "54116", 00:19:13.799 "trtype": "TCP" 00:19:13.799 }, 00:19:13.799 "qid": 0, 00:19:13.799 "state": "enabled", 00:19:13.799 "thread": "nvmf_tgt_poll_group_000" 00:19:13.799 } 00:19:13.799 ]' 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.799 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.057 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.057 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.057 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.057 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.057 17:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.315 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:14.315 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:15.249 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.508 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.074 00:19:16.075 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.075 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.075 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.332 { 00:19:16.332 "auth": { 00:19:16.332 "dhgroup": 
"ffdhe6144", 00:19:16.332 "digest": "sha256", 00:19:16.332 "state": "completed" 00:19:16.332 }, 00:19:16.332 "cntlid": 39, 00:19:16.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:16.332 "listen_address": { 00:19:16.332 "adrfam": "IPv4", 00:19:16.332 "traddr": "10.0.0.3", 00:19:16.332 "trsvcid": "4420", 00:19:16.332 "trtype": "TCP" 00:19:16.332 }, 00:19:16.332 "peer_address": { 00:19:16.332 "adrfam": "IPv4", 00:19:16.332 "traddr": "10.0.0.1", 00:19:16.332 "trsvcid": "54146", 00:19:16.332 "trtype": "TCP" 00:19:16.332 }, 00:19:16.332 "qid": 0, 00:19:16.332 "state": "enabled", 00:19:16.332 "thread": "nvmf_tgt_poll_group_000" 00:19:16.332 } 00:19:16.332 ]' 00:19:16.332 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.332 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.332 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.332 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.332 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.591 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.591 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.591 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.849 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:16.849 17:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:17.413 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.413 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:17.413 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.413 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.669 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.669 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.669 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.669 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.669 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.926 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.860 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.860 17:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.860 { 00:19:18.860 "auth": { 00:19:18.860 "dhgroup": "ffdhe8192", 00:19:18.860 "digest": "sha256", 00:19:18.860 "state": "completed" 00:19:18.860 }, 00:19:18.860 "cntlid": 41, 00:19:18.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:18.860 "listen_address": { 00:19:18.860 "adrfam": "IPv4", 00:19:18.860 "traddr": "10.0.0.3", 00:19:18.860 "trsvcid": "4420", 00:19:18.860 "trtype": "TCP" 00:19:18.860 }, 00:19:18.860 "peer_address": { 00:19:18.860 "adrfam": "IPv4", 00:19:18.860 "traddr": "10.0.0.1", 00:19:18.860 "trsvcid": "59174", 00:19:18.860 "trtype": "TCP" 00:19:18.860 }, 00:19:18.860 "qid": 0, 00:19:18.860 "state": "enabled", 00:19:18.860 "thread": "nvmf_tgt_poll_group_000" 00:19:18.860 } 00:19:18.860 ]' 00:19:18.860 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.118 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.118 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.118 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.118 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.118 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.119 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.119 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.377 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:19.377 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.310 17:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.310 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.568 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.503 00:19:21.503 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.503 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.503 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.503 17:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.503 { 00:19:21.503 "auth": { 00:19:21.503 "dhgroup": "ffdhe8192", 00:19:21.503 "digest": "sha256", 00:19:21.503 "state": "completed" 00:19:21.503 }, 00:19:21.503 "cntlid": 43, 00:19:21.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:21.503 "listen_address": { 00:19:21.503 "adrfam": "IPv4", 00:19:21.503 "traddr": "10.0.0.3", 00:19:21.503 "trsvcid": "4420", 00:19:21.503 "trtype": "TCP" 00:19:21.503 }, 00:19:21.503 "peer_address": { 00:19:21.503 "adrfam": "IPv4", 00:19:21.503 "traddr": "10.0.0.1", 00:19:21.503 "trsvcid": "59190", 00:19:21.503 "trtype": "TCP" 00:19:21.503 }, 00:19:21.503 "qid": 0, 00:19:21.503 "state": "enabled", 00:19:21.503 "thread": "nvmf_tgt_poll_group_000" 00:19:21.503 } 00:19:21.503 ]' 00:19:21.503 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.760 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.325 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:22.325 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
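Every iteration in this trace ends with the same read-back check: the target-side RPC nvmf_subsystem_get_qpairs is captured into $qpairs, then jq plus [[ ]] assertions confirm the digest, DH group, and authentication state that were actually negotiated. The escaped patterns shown in the trace (\s\h\a\2\5\6 and the like) are just how bash prints an unquoted [[ ]] operand under set -x; the source compares against the plain string. A minimal sketch of that check, assuming the socket paths and NQNs used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")           # target-side RPC, default socket
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished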
00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.891 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.149 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.081 00:19:24.081 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.081 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.081 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.339 17:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.339 { 00:19:24.339 "auth": { 00:19:24.339 "dhgroup": "ffdhe8192", 00:19:24.339 "digest": "sha256", 00:19:24.339 "state": "completed" 00:19:24.339 }, 00:19:24.339 "cntlid": 45, 00:19:24.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:24.339 "listen_address": { 00:19:24.339 "adrfam": "IPv4", 00:19:24.339 "traddr": "10.0.0.3", 00:19:24.339 "trsvcid": "4420", 00:19:24.339 "trtype": "TCP" 00:19:24.339 }, 00:19:24.339 "peer_address": { 00:19:24.339 "adrfam": "IPv4", 00:19:24.339 "traddr": "10.0.0.1", 00:19:24.339 "trsvcid": "59212", 00:19:24.339 "trtype": "TCP" 00:19:24.339 }, 00:19:24.339 "qid": 0, 00:19:24.339 "state": "enabled", 00:19:24.339 "thread": "nvmf_tgt_poll_group_000" 00:19:24.339 } 00:19:24.339 ]' 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.339 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.339 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.339 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.339 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.339 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.339 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.597 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:24.597 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
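The opaque strings handed to --dhchap-secret and --dhchap-ctrl-secret follow the NVMe DH-HMAC-CHAP secret representation: DHHC-1:<hh>:<base64>:, where <hh> names the hash used to transform the configured secret (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key material with a CRC-32 appended. One way to mint such a secret with nvme-cli, shown as a sketch rather than the exact command this suite used:

  # 32 random bytes, SHA-256 transform -> prints "DHHC-1:01:...:"
  nvme gen-dhchap-key --hmac=1 --key-length=32 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e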
00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.531 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.791 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.357 00:19:26.357 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.357 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.357 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.940 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.941 
17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.941 { 00:19:26.941 "auth": { 00:19:26.941 "dhgroup": "ffdhe8192", 00:19:26.941 "digest": "sha256", 00:19:26.941 "state": "completed" 00:19:26.941 }, 00:19:26.941 "cntlid": 47, 00:19:26.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:26.941 "listen_address": { 00:19:26.941 "adrfam": "IPv4", 00:19:26.941 "traddr": "10.0.0.3", 00:19:26.941 "trsvcid": "4420", 00:19:26.941 "trtype": "TCP" 00:19:26.941 }, 00:19:26.941 "peer_address": { 00:19:26.941 "adrfam": "IPv4", 00:19:26.941 "traddr": "10.0.0.1", 00:19:26.941 "trsvcid": "59246", 00:19:26.941 "trtype": "TCP" 00:19:26.941 }, 00:19:26.941 "qid": 0, 00:19:26.941 "state": "enabled", 00:19:26.941 "thread": "nvmf_tgt_poll_group_000" 00:19:26.941 } 00:19:26.941 ]' 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.941 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.199 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:27.199 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
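At this point the trace leaves sha256 behind: the @118–@121 markers show three nested loops (digest, DH group, key id), and each turn of the inner loop reconfigures the host side with bdev_nvme_set_options before reconnecting. The shape of the sweep, with the array contents inferred from values that appear in this run (this excerpt shows sha256/sha384 and null/ffdhe2048/ffdhe6144/ffdhe8192; the full suite presumably covers the remaining groups and sha512 as well):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  digests=(sha256 sha384 sha512)                                     # inferred, not all shown here
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # inferred, not all shown here
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in 0 1 2 3; do
        # restrict what the host will negotiate, then run one connect/verify pass
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # connect_authenticate "$digest" "$dhgroup" "$keyid" ...
      done
    done
  done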
00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.149 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.408 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.408 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.408 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.408 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.667 00:19:28.667 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.667 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.667 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.925 { 00:19:28.925 "auth": { 00:19:28.925 "dhgroup": "null", 00:19:28.925 "digest": "sha384", 00:19:28.925 "state": "completed" 00:19:28.925 }, 00:19:28.925 "cntlid": 49, 00:19:28.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:28.925 "listen_address": { 00:19:28.925 "adrfam": "IPv4", 00:19:28.925 "traddr": "10.0.0.3", 00:19:28.925 "trsvcid": "4420", 00:19:28.925 "trtype": "TCP" 00:19:28.925 }, 00:19:28.925 "peer_address": { 00:19:28.925 "adrfam": "IPv4", 00:19:28.925 "traddr": "10.0.0.1", 00:19:28.925 "trsvcid": "53584", 00:19:28.925 "trtype": "TCP" 00:19:28.925 }, 00:19:28.925 "qid": 0, 00:19:28.925 "state": "enabled", 00:19:28.925 "thread": "nvmf_tgt_poll_group_000" 00:19:28.925 } 00:19:28.925 ]' 00:19:28.925 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.184 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.442 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:29.442 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.377 17:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.377 17:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.377 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.378 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.944 00:19:30.944 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.944 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.944 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.203 { 00:19:31.203 "auth": { 00:19:31.203 "dhgroup": "null", 00:19:31.203 "digest": "sha384", 00:19:31.203 "state": "completed" 00:19:31.203 }, 00:19:31.203 "cntlid": 51, 00:19:31.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:31.203 "listen_address": { 00:19:31.203 "adrfam": "IPv4", 00:19:31.203 "traddr": "10.0.0.3", 00:19:31.203 "trsvcid": "4420", 00:19:31.203 "trtype": "TCP" 00:19:31.203 }, 00:19:31.203 "peer_address": { 00:19:31.203 "adrfam": "IPv4", 00:19:31.203 "traddr": "10.0.0.1", 00:19:31.203 "trsvcid": "53598", 00:19:31.203 "trtype": "TCP" 00:19:31.203 }, 00:19:31.203 "qid": 0, 00:19:31.203 "state": "enabled", 00:19:31.203 "thread": "nvmf_tgt_poll_group_000" 00:19:31.203 } 00:19:31.203 ]' 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.203 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.462 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.462 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.462 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.721 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:31.721 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:32.288 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.288 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.288 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.856 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.116 00:19:33.116 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.116 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:33.116 17:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.374 { 00:19:33.374 "auth": { 00:19:33.374 "dhgroup": "null", 00:19:33.374 "digest": "sha384", 00:19:33.374 "state": "completed" 00:19:33.374 }, 00:19:33.374 "cntlid": 53, 00:19:33.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:33.374 "listen_address": { 00:19:33.374 "adrfam": "IPv4", 00:19:33.374 "traddr": "10.0.0.3", 00:19:33.374 "trsvcid": "4420", 00:19:33.374 "trtype": "TCP" 00:19:33.374 }, 00:19:33.374 "peer_address": { 00:19:33.374 "adrfam": "IPv4", 00:19:33.374 "traddr": "10.0.0.1", 00:19:33.374 "trsvcid": "53614", 00:19:33.374 "trtype": "TCP" 00:19:33.374 }, 00:19:33.374 "qid": 0, 00:19:33.374 "state": "enabled", 00:19:33.374 "thread": "nvmf_tgt_poll_group_000" 00:19:33.374 } 00:19:33.374 ]' 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.374 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.633 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.633 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.633 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.892 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:33.892 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.461 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.028 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.287 00:19:35.287 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.287 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
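Note the asymmetry in the key3 passes: nvmf_subsystem_add_host and bdev_nvme_attach_controller carry only --dhchap-key key3, with no --dhchap-ctrlr-key. That comes from the @68 line's ${ckeys[$3]:+...} expansion: when the controller-key entry for this key id is empty, bash's ${var:+word} produces nothing and the array stays empty, so this iteration exercises unidirectional authentication (the host proves its identity; the controller is not challenged back). A sketch of the idiom, using hypothetical keyid/hostnqn variables in place of the function's positional parameters:

  # empty ckeys[keyid] -> ckey expands to zero words, flag omitted entirely
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"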
00:19:35.287 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.546 { 00:19:35.546 "auth": { 00:19:35.546 "dhgroup": "null", 00:19:35.546 "digest": "sha384", 00:19:35.546 "state": "completed" 00:19:35.546 }, 00:19:35.546 "cntlid": 55, 00:19:35.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:35.546 "listen_address": { 00:19:35.546 "adrfam": "IPv4", 00:19:35.546 "traddr": "10.0.0.3", 00:19:35.546 "trsvcid": "4420", 00:19:35.546 "trtype": "TCP" 00:19:35.546 }, 00:19:35.546 "peer_address": { 00:19:35.546 "adrfam": "IPv4", 00:19:35.546 "traddr": "10.0.0.1", 00:19:35.546 "trsvcid": "53640", 00:19:35.546 "trtype": "TCP" 00:19:35.546 }, 00:19:35.546 "qid": 0, 00:19:35.546 "state": "enabled", 00:19:35.546 "thread": "nvmf_tgt_poll_group_000" 00:19:35.546 } 00:19:35.546 ]' 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.546 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.804 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.804 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.804 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.804 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.804 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.063 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:36.063 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
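The same credentials are then replayed through the kernel initiator with nvme-cli, which checks interoperability outside SPDK's own host stack. In the connect line, -i 1 caps the controller at one I/O queue, -l 0 sets ctrl-loss-tmo to zero so a failed handshake errors out instead of retrying, and the target port defaults to 4420. The ffdhe* names that follow are the RFC 7919 finite-field DH groups (2048- to 8192-bit moduli); "null" skips the DH exchange so the session is keyed from the shared secret alone, and the timestamps suggest the larger groups cost visibly more per handshake. The round trip, assuming shell variables hold the secrets from this run:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0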
00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.629 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.197 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.455 00:19:37.455 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.455 
17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.455 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.712 { 00:19:37.712 "auth": { 00:19:37.712 "dhgroup": "ffdhe2048", 00:19:37.712 "digest": "sha384", 00:19:37.712 "state": "completed" 00:19:37.712 }, 00:19:37.712 "cntlid": 57, 00:19:37.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:37.712 "listen_address": { 00:19:37.712 "adrfam": "IPv4", 00:19:37.712 "traddr": "10.0.0.3", 00:19:37.712 "trsvcid": "4420", 00:19:37.712 "trtype": "TCP" 00:19:37.712 }, 00:19:37.712 "peer_address": { 00:19:37.712 "adrfam": "IPv4", 00:19:37.712 "traddr": "10.0.0.1", 00:19:37.712 "trsvcid": "53666", 00:19:37.712 "trtype": "TCP" 00:19:37.712 }, 00:19:37.712 "qid": 0, 00:19:37.712 "state": "enabled", 00:19:37.712 "thread": "nvmf_tgt_poll_group_000" 00:19:37.712 } 00:19:37.712 ]' 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.712 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.970 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.970 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.970 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.970 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.970 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.229 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:38.229 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: 
--dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.796 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.362 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.620 00:19:39.620 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.620 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.620 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.879 { 00:19:39.879 "auth": { 00:19:39.879 "dhgroup": "ffdhe2048", 00:19:39.879 "digest": "sha384", 00:19:39.879 "state": "completed" 00:19:39.879 }, 00:19:39.879 "cntlid": 59, 00:19:39.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:39.879 "listen_address": { 00:19:39.879 "adrfam": "IPv4", 00:19:39.879 "traddr": "10.0.0.3", 00:19:39.879 "trsvcid": "4420", 00:19:39.879 "trtype": "TCP" 00:19:39.879 }, 00:19:39.879 "peer_address": { 00:19:39.879 "adrfam": "IPv4", 00:19:39.879 "traddr": "10.0.0.1", 00:19:39.879 "trsvcid": "43128", 00:19:39.879 "trtype": "TCP" 00:19:39.879 }, 00:19:39.879 "qid": 0, 00:19:39.879 "state": "enabled", 00:19:39.879 "thread": "nvmf_tgt_poll_group_000" 00:19:39.879 } 00:19:39.879 ]' 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.879 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.137 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.137 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.137 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.396 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:40.396 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:40.962 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.962 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.963 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.221 17:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.790 00:19:41.790 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.790 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.790 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.049 { 00:19:42.049 "auth": { 00:19:42.049 "dhgroup": "ffdhe2048", 00:19:42.049 "digest": "sha384", 00:19:42.049 "state": "completed" 00:19:42.049 }, 00:19:42.049 "cntlid": 61, 00:19:42.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:42.049 "listen_address": { 00:19:42.049 "adrfam": "IPv4", 00:19:42.049 "traddr": "10.0.0.3", 00:19:42.049 "trsvcid": "4420", 00:19:42.049 "trtype": "TCP" 00:19:42.049 }, 00:19:42.049 "peer_address": { 00:19:42.049 "adrfam": "IPv4", 00:19:42.049 "traddr": "10.0.0.1", 00:19:42.049 "trsvcid": "43168", 00:19:42.049 "trtype": "TCP" 00:19:42.049 }, 00:19:42.049 "qid": 0, 00:19:42.049 "state": "enabled", 00:19:42.049 "thread": "nvmf_tgt_poll_group_000" 00:19:42.049 } 00:19:42.049 ]' 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.049 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.617 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:42.617 17:40:24 
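After each attach, the test reads the negotiated parameters back from the target and asserts on them; the qpair JSON above (cntlid 61) is what nvmf_subsystem_get_qpairs returns once authentication has finished. A sketch of the same check, reusing $RPC and $SUBNQN from the earlier sketch (the trace issues this RPC through rpc_cmd on the target's default socket, not host.sock):

# Pull the auth block of the first qpair and compare it with the expected
# digest/dhgroup; "completed" means the DH-CHAP exchange succeeded.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]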
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.185 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.444 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.703 00:19:43.961 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.961 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.961 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.220 { 00:19:44.220 "auth": { 00:19:44.220 "dhgroup": "ffdhe2048", 00:19:44.220 "digest": "sha384", 00:19:44.220 "state": "completed" 00:19:44.220 }, 00:19:44.220 "cntlid": 63, 00:19:44.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:44.220 "listen_address": { 00:19:44.220 "adrfam": "IPv4", 00:19:44.220 "traddr": "10.0.0.3", 00:19:44.220 "trsvcid": "4420", 00:19:44.220 "trtype": "TCP" 00:19:44.220 }, 00:19:44.220 "peer_address": { 00:19:44.220 "adrfam": "IPv4", 00:19:44.220 "traddr": "10.0.0.1", 00:19:44.220 "trsvcid": "43190", 00:19:44.220 "trtype": "TCP" 00:19:44.220 }, 00:19:44.220 "qid": 0, 00:19:44.220 "state": "enabled", 00:19:44.220 "thread": "nvmf_tgt_poll_group_000" 00:19:44.220 } 00:19:44.220 ]' 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.220 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.787 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:44.787 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.355 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.613 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:45.614 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.204 00:19:46.204 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.204 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.204 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.462 { 00:19:46.462 "auth": { 00:19:46.462 "dhgroup": "ffdhe3072", 00:19:46.462 "digest": "sha384", 00:19:46.462 "state": "completed" 00:19:46.462 }, 00:19:46.462 "cntlid": 65, 00:19:46.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:46.462 "listen_address": { 00:19:46.462 "adrfam": "IPv4", 00:19:46.462 "traddr": "10.0.0.3", 00:19:46.462 "trsvcid": "4420", 00:19:46.462 "trtype": "TCP" 00:19:46.462 }, 00:19:46.462 "peer_address": { 00:19:46.462 "adrfam": "IPv4", 00:19:46.462 "traddr": "10.0.0.1", 00:19:46.462 "trsvcid": "43204", 00:19:46.462 "trtype": "TCP" 00:19:46.462 }, 00:19:46.462 "qid": 0, 00:19:46.462 "state": "enabled", 00:19:46.462 "thread": "nvmf_tgt_poll_group_000" 00:19:46.462 } 00:19:46.462 ]' 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.462 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.027 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:47.027 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.601 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.164 17:40:29 
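The nvme_connect/disconnect pair above exercises the same subsystem from the kernel initiator rather than the SPDK bdev layer. The secrets are passed verbatim on the command line in the NVMe in-band-auth representation "DHHC-1:NN:<base64>:", where the two-digit field appears to identify the hash the secret was transformed with (00 for an untransformed secret; the 01/02/03 values seen in this section would correspond to SHA-256/384/512). A sketch with the literal secrets elided:

# Flags copied from the trace; the <...> placeholders stand in for the
# base64 secrets logged above.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller secret>:'
nvme disconnect -n "$SUBNQN"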
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.164 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.421 00:19:48.421 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.421 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.421 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.677 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.678 { 00:19:48.678 "auth": { 00:19:48.678 "dhgroup": "ffdhe3072", 00:19:48.678 "digest": "sha384", 00:19:48.678 "state": "completed" 00:19:48.678 }, 00:19:48.678 "cntlid": 67, 00:19:48.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:48.678 "listen_address": { 00:19:48.678 "adrfam": "IPv4", 00:19:48.678 "traddr": "10.0.0.3", 00:19:48.678 "trsvcid": "4420", 00:19:48.678 "trtype": "TCP" 00:19:48.678 }, 00:19:48.678 "peer_address": { 00:19:48.678 "adrfam": "IPv4", 00:19:48.678 "traddr": "10.0.0.1", 00:19:48.678 "trsvcid": "48096", 00:19:48.678 "trtype": "TCP" 00:19:48.678 }, 00:19:48.678 "qid": 0, 00:19:48.678 "state": "enabled", 00:19:48.678 "thread": "nvmf_tgt_poll_group_000" 00:19:48.678 } 00:19:48.678 ]' 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.678 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.935 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.935 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.935 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.935 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.935 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.193 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:49.193 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.128 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.694 00:19:50.694 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.694 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.694 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.953 { 00:19:50.953 "auth": { 00:19:50.953 "dhgroup": "ffdhe3072", 00:19:50.953 "digest": "sha384", 00:19:50.953 "state": "completed" 00:19:50.953 }, 00:19:50.953 "cntlid": 69, 00:19:50.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:50.953 "listen_address": { 00:19:50.953 "adrfam": "IPv4", 00:19:50.953 "traddr": "10.0.0.3", 00:19:50.953 "trsvcid": "4420", 00:19:50.953 "trtype": "TCP" 00:19:50.953 }, 00:19:50.953 "peer_address": { 00:19:50.953 "adrfam": "IPv4", 00:19:50.953 "traddr": "10.0.0.1", 00:19:50.953 "trsvcid": "48124", 00:19:50.953 "trtype": "TCP" 00:19:50.953 }, 00:19:50.953 "qid": 0, 00:19:50.953 "state": "enabled", 00:19:50.953 "thread": "nvmf_tgt_poll_group_000" 00:19:50.953 } 00:19:50.953 ]' 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.953 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.212 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.212 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.212 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.212 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:51.212 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.470 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:51.470 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.404 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.663 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.923 00:19:52.923 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.923 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.923 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.491 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.491 { 00:19:53.491 "auth": { 00:19:53.491 "dhgroup": "ffdhe3072", 00:19:53.491 "digest": "sha384", 00:19:53.491 "state": "completed" 00:19:53.491 }, 00:19:53.491 "cntlid": 71, 00:19:53.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:53.491 "listen_address": { 00:19:53.491 "adrfam": "IPv4", 00:19:53.491 "traddr": "10.0.0.3", 00:19:53.491 "trsvcid": "4420", 00:19:53.491 "trtype": "TCP" 00:19:53.491 }, 00:19:53.491 "peer_address": { 00:19:53.491 "adrfam": "IPv4", 00:19:53.491 "traddr": "10.0.0.1", 00:19:53.491 "trsvcid": "48142", 00:19:53.491 "trtype": "TCP" 00:19:53.491 }, 00:19:53.491 "qid": 0, 00:19:53.491 "state": "enabled", 00:19:53.491 "thread": "nvmf_tgt_poll_group_000" 00:19:53.491 } 00:19:53.491 ]' 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.491 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.751 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:53.751 17:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.686 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.945 17:40:36 
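By this point the trace has cycled ffdhe2048 and ffdhe3072 through all four key indices and is starting on ffdhe4096; the target/auth.sh@119 and @120 records show the two loops driving it. The shape of the iteration, reconstructed from the trace (array contents are assumptions beyond the groups and key ids actually visible in this section, and sha384 is the digest under test throughout it):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups exercised here
for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
    for keyid in "${!keys[@]}"; do         # target/auth.sh@120, ids 0-3
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"                      # @121
        connect_authenticate sha384 "$dhgroup" "$keyid"       # @123
    done
done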
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.945 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.204 00:19:55.204 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.204 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.204 17:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.462 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.720 { 00:19:55.720 "auth": { 00:19:55.720 "dhgroup": "ffdhe4096", 00:19:55.720 "digest": "sha384", 00:19:55.720 "state": "completed" 00:19:55.720 }, 00:19:55.720 "cntlid": 73, 00:19:55.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:55.720 "listen_address": { 00:19:55.720 "adrfam": "IPv4", 00:19:55.720 "traddr": "10.0.0.3", 00:19:55.720 "trsvcid": "4420", 00:19:55.720 "trtype": "TCP" 00:19:55.720 }, 00:19:55.720 "peer_address": { 00:19:55.720 "adrfam": "IPv4", 00:19:55.720 "traddr": "10.0.0.1", 00:19:55.720 "trsvcid": "48174", 00:19:55.720 "trtype": "TCP" 00:19:55.720 }, 00:19:55.720 "qid": 0, 00:19:55.720 "state": "enabled", 00:19:55.720 "thread": "nvmf_tgt_poll_group_000" 00:19:55.720 } 00:19:55.720 ]' 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.720 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.978 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:55.978 17:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.914 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.173 17:40:38 
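The key3 passes in this section are the only ones where nvmf_subsystem_add_host and nvme_connect carry no controller key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) record above builds that flag conditionally, so an empty ckeys entry silently downgrades the pass to unidirectional authentication. A self-contained illustration of the :+ expansion, with made-up values (the script uses its positional argument $3 where keyid appears below):

ckeys=([0]=set [1]=set [2]=set [3]=)    # key3 deliberately has no ctrlr key
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"    # 0 -> no --dhchap-ctrlr-key flag is appended
keyid=0
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey0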
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.173 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.431 00:19:57.431 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.431 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.431 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.689 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.689 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.689 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.689 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.947 { 00:19:57.947 "auth": { 00:19:57.947 "dhgroup": "ffdhe4096", 00:19:57.947 "digest": "sha384", 00:19:57.947 "state": "completed" 00:19:57.947 }, 00:19:57.947 "cntlid": 75, 00:19:57.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:19:57.947 "listen_address": { 00:19:57.947 "adrfam": "IPv4", 00:19:57.947 "traddr": "10.0.0.3", 00:19:57.947 "trsvcid": "4420", 00:19:57.947 "trtype": "TCP" 00:19:57.947 }, 00:19:57.947 "peer_address": { 00:19:57.947 "adrfam": "IPv4", 00:19:57.947 "traddr": "10.0.0.1", 00:19:57.947 "trsvcid": "48198", 00:19:57.947 "trtype": "TCP" 00:19:57.947 }, 00:19:57.947 "qid": 0, 00:19:57.947 "state": "enabled", 00:19:57.947 "thread": "nvmf_tgt_poll_group_000" 00:19:57.947 } 00:19:57.947 ]' 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.947 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.206 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:58.206 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.773 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.340 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.599 00:19:59.599 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.599 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.599 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.857 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.857 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.857 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.857 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.116 { 00:20:00.116 "auth": { 00:20:00.116 "dhgroup": "ffdhe4096", 00:20:00.116 "digest": "sha384", 00:20:00.116 "state": "completed" 00:20:00.116 }, 00:20:00.116 "cntlid": 77, 00:20:00.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:00.116 "listen_address": { 00:20:00.116 "adrfam": "IPv4", 00:20:00.116 "traddr": "10.0.0.3", 00:20:00.116 "trsvcid": "4420", 00:20:00.116 "trtype": "TCP" 00:20:00.116 }, 00:20:00.116 "peer_address": { 00:20:00.116 "adrfam": "IPv4", 00:20:00.116 "traddr": "10.0.0.1", 00:20:00.116 "trsvcid": "36066", 00:20:00.116 "trtype": "TCP" 00:20:00.116 }, 00:20:00.116 "qid": 0, 00:20:00.116 "state": "enabled", 00:20:00.116 "thread": "nvmf_tgt_poll_group_000" 00:20:00.116 } 00:20:00.116 ]' 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.116 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.375 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:00.375 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.311 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.570 17:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.570 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.138 00:20:02.138 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.138 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.138 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.396 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.396 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.396 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.396 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.396 { 00:20:02.396 "auth": { 00:20:02.396 "dhgroup": "ffdhe4096", 00:20:02.396 "digest": "sha384", 00:20:02.396 "state": "completed" 00:20:02.396 }, 00:20:02.396 "cntlid": 79, 00:20:02.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:02.396 "listen_address": { 00:20:02.396 "adrfam": "IPv4", 00:20:02.396 "traddr": "10.0.0.3", 00:20:02.396 "trsvcid": "4420", 00:20:02.396 "trtype": "TCP" 00:20:02.396 }, 00:20:02.396 "peer_address": { 00:20:02.396 "adrfam": "IPv4", 00:20:02.396 "traddr": "10.0.0.1", 00:20:02.396 "trsvcid": "36096", 00:20:02.396 "trtype": "TCP" 00:20:02.396 }, 00:20:02.396 "qid": 0, 00:20:02.396 "state": "enabled", 00:20:02.396 "thread": "nvmf_tgt_poll_group_000" 00:20:02.396 } 00:20:02.396 ]' 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.396 17:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.396 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.655 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:02.655 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.590 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.848 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.417 00:20:04.417 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.417 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.417 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.675 { 00:20:04.675 "auth": { 00:20:04.675 "dhgroup": "ffdhe6144", 00:20:04.675 "digest": "sha384", 00:20:04.675 "state": "completed" 00:20:04.675 }, 00:20:04.675 "cntlid": 81, 00:20:04.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:04.675 "listen_address": { 00:20:04.675 "adrfam": "IPv4", 00:20:04.675 "traddr": "10.0.0.3", 00:20:04.675 "trsvcid": "4420", 00:20:04.675 "trtype": "TCP" 00:20:04.675 }, 00:20:04.675 "peer_address": { 00:20:04.675 "adrfam": "IPv4", 00:20:04.675 "traddr": "10.0.0.1", 00:20:04.675 "trsvcid": "36110", 00:20:04.675 "trtype": "TCP" 00:20:04.675 }, 00:20:04.675 "qid": 0, 00:20:04.675 "state": "enabled", 00:20:04.675 "thread": "nvmf_tgt_poll_group_000" 00:20:04.675 } 00:20:04.675 ]' 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
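The trace above is one pass of the suite's connect_authenticate loop, which re-runs for every digest/dhgroup/key combination (here sha384 with ffdhe4096, ffdhe6144 and ffdhe8192 in turn, keys 0-3, the controller ID stepping 73, 75, 77, ... per pass). Condensed into a plain script — a sketch only: the rpc alias and the digest/dhgroup/keyid variables are illustrative names, not taken from target/auth.sh, and it assumes a target already listening on 10.0.0.3:4420 with the suite's rpc_cmd wrapper pointed at the target's RPC socket — the flow per iteration is:

#!/usr/bin/env bash
set -e

# Host-side RPCs go through the host daemon's socket (the "hostrpc"
# wrapper visible in this log); target-side RPCs go through rpc_cmd.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e"
digest=sha384 dhgroup=ffdhe6144 keyid=0   # illustrative combination

# 1. Pin the host to a single DH-HMAC-CHAP digest/dhgroup combination.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Register the host on the subsystem with a key pair (bidirectional
#    auth: --dhchap-key authenticates the host, --dhchap-ctrlr-key the
#    controller).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller; the DH-HMAC-CHAP exchange runs here.
$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Assert the negotiated parameters on the target's queue pair.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# 5. Tear down before the next combination.
$rpc bdev_nvme_detach_controller nvme0

Each pass then repeats the handshake through the kernel initiator with the equivalent DHHC-1 secrets (the nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret / nvme disconnect pair wrapped around every cycle in the trace) and finally removes the host again with nvmf_subsystem_remove_host.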
00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.675 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.242 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:05.242 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.811 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.069 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.724 00:20:06.724 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.724 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.724 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.983 { 00:20:06.983 "auth": { 00:20:06.983 "dhgroup": "ffdhe6144", 00:20:06.983 "digest": "sha384", 00:20:06.983 "state": "completed" 00:20:06.983 }, 00:20:06.983 "cntlid": 83, 00:20:06.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:06.983 "listen_address": { 00:20:06.983 "adrfam": "IPv4", 00:20:06.983 "traddr": "10.0.0.3", 00:20:06.983 "trsvcid": "4420", 00:20:06.983 "trtype": "TCP" 00:20:06.983 }, 00:20:06.983 "peer_address": { 00:20:06.983 "adrfam": "IPv4", 00:20:06.983 "traddr": "10.0.0.1", 00:20:06.983 "trsvcid": "36136", 00:20:06.983 "trtype": "TCP" 00:20:06.983 }, 00:20:06.983 "qid": 0, 00:20:06.983 "state": 
"enabled", 00:20:06.983 "thread": "nvmf_tgt_poll_group_000" 00:20:06.983 } 00:20:06.983 ]' 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.983 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.241 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.241 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.241 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.241 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.241 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.499 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:07.499 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.435 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.693 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.694 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.260 00:20:09.260 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.260 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.260 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.519 { 00:20:09.519 "auth": { 00:20:09.519 "dhgroup": "ffdhe6144", 00:20:09.519 "digest": "sha384", 00:20:09.519 "state": "completed" 00:20:09.519 }, 00:20:09.519 "cntlid": 85, 00:20:09.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:09.519 "listen_address": { 00:20:09.519 "adrfam": "IPv4", 00:20:09.519 "traddr": "10.0.0.3", 00:20:09.519 "trsvcid": "4420", 00:20:09.519 "trtype": "TCP" 00:20:09.519 }, 00:20:09.519 "peer_address": { 00:20:09.519 "adrfam": "IPv4", 00:20:09.519 "traddr": "10.0.0.1", 00:20:09.519 
"trsvcid": "58736", 00:20:09.519 "trtype": "TCP" 00:20:09.519 }, 00:20:09.519 "qid": 0, 00:20:09.519 "state": "enabled", 00:20:09.519 "thread": "nvmf_tgt_poll_group_000" 00:20:09.519 } 00:20:09.519 ]' 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.519 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.085 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:10.085 17:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:10.650 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.650 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:10.650 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.651 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.651 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.651 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.651 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.651 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:10.909 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.910 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.910 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.910 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.910 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.910 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.476 00:20:11.476 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.476 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.476 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.735 { 00:20:11.735 "auth": { 00:20:11.735 "dhgroup": "ffdhe6144", 00:20:11.735 "digest": "sha384", 00:20:11.735 "state": "completed" 00:20:11.735 }, 00:20:11.735 "cntlid": 87, 00:20:11.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:11.735 "listen_address": { 00:20:11.735 "adrfam": "IPv4", 00:20:11.735 "traddr": "10.0.0.3", 00:20:11.735 "trsvcid": "4420", 00:20:11.735 "trtype": "TCP" 00:20:11.735 }, 00:20:11.735 "peer_address": { 00:20:11.735 "adrfam": "IPv4", 00:20:11.735 "traddr": "10.0.0.1", 
00:20:11.735 "trsvcid": "58766", 00:20:11.735 "trtype": "TCP" 00:20:11.735 }, 00:20:11.735 "qid": 0, 00:20:11.735 "state": "enabled", 00:20:11.735 "thread": "nvmf_tgt_poll_group_000" 00:20:11.735 } 00:20:11.735 ]' 00:20:11.735 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.993 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.252 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:12.252 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.908 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.180 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.116 00:20:14.116 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.116 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.116 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.375 { 00:20:14.375 "auth": { 00:20:14.375 "dhgroup": "ffdhe8192", 00:20:14.375 "digest": "sha384", 00:20:14.375 "state": "completed" 00:20:14.375 }, 00:20:14.375 "cntlid": 89, 00:20:14.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:14.375 "listen_address": { 00:20:14.375 "adrfam": "IPv4", 00:20:14.375 "traddr": "10.0.0.3", 00:20:14.375 "trsvcid": "4420", 00:20:14.375 "trtype": "TCP" 
00:20:14.375 }, 00:20:14.375 "peer_address": { 00:20:14.375 "adrfam": "IPv4", 00:20:14.375 "traddr": "10.0.0.1", 00:20:14.375 "trsvcid": "58798", 00:20:14.375 "trtype": "TCP" 00:20:14.375 }, 00:20:14.375 "qid": 0, 00:20:14.375 "state": "enabled", 00:20:14.375 "thread": "nvmf_tgt_poll_group_000" 00:20:14.375 } 00:20:14.375 ]' 00:20:14.375 17:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.375 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.940 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:14.940 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.506 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.764 17:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.764 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.330 00:20:16.330 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.330 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.330 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.895 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.895 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.896 { 00:20:16.896 "auth": { 00:20:16.896 "dhgroup": "ffdhe8192", 00:20:16.896 "digest": "sha384", 00:20:16.896 "state": "completed" 00:20:16.896 }, 00:20:16.896 "cntlid": 91, 00:20:16.896 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:16.896 "listen_address": { 00:20:16.896 "adrfam": "IPv4", 00:20:16.896 "traddr": "10.0.0.3", 00:20:16.896 "trsvcid": "4420", 00:20:16.896 "trtype": "TCP" 00:20:16.896 }, 00:20:16.896 "peer_address": { 00:20:16.896 "adrfam": "IPv4", 00:20:16.896 "traddr": "10.0.0.1", 00:20:16.896 "trsvcid": "58826", 00:20:16.896 "trtype": "TCP" 00:20:16.896 }, 00:20:16.896 "qid": 0, 00:20:16.896 "state": "enabled", 00:20:16.896 "thread": "nvmf_tgt_poll_group_000" 00:20:16.896 } 00:20:16.896 ]' 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.896 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.154 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:17.154 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.115 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.373 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.938 00:20:18.938 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.938 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.939 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.196 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.196 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.197 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.197 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.454 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.454 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.454 { 00:20:19.454 "auth": { 00:20:19.455 "dhgroup": "ffdhe8192", 
00:20:19.455 "digest": "sha384", 00:20:19.455 "state": "completed" 00:20:19.455 }, 00:20:19.455 "cntlid": 93, 00:20:19.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:19.455 "listen_address": { 00:20:19.455 "adrfam": "IPv4", 00:20:19.455 "traddr": "10.0.0.3", 00:20:19.455 "trsvcid": "4420", 00:20:19.455 "trtype": "TCP" 00:20:19.455 }, 00:20:19.455 "peer_address": { 00:20:19.455 "adrfam": "IPv4", 00:20:19.455 "traddr": "10.0.0.1", 00:20:19.455 "trsvcid": "55358", 00:20:19.455 "trtype": "TCP" 00:20:19.455 }, 00:20:19.455 "qid": 0, 00:20:19.455 "state": "enabled", 00:20:19.455 "thread": "nvmf_tgt_poll_group_000" 00:20:19.455 } 00:20:19.455 ]' 00:20:19.455 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.455 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.712 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:19.712 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:20.644 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.902 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.468 00:20:21.468 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.468 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.468 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.106 { 00:20:22.106 "auth": { 00:20:22.106 "dhgroup": 
"ffdhe8192", 00:20:22.106 "digest": "sha384", 00:20:22.106 "state": "completed" 00:20:22.106 }, 00:20:22.106 "cntlid": 95, 00:20:22.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:22.106 "listen_address": { 00:20:22.106 "adrfam": "IPv4", 00:20:22.106 "traddr": "10.0.0.3", 00:20:22.106 "trsvcid": "4420", 00:20:22.106 "trtype": "TCP" 00:20:22.106 }, 00:20:22.106 "peer_address": { 00:20:22.106 "adrfam": "IPv4", 00:20:22.106 "traddr": "10.0.0.1", 00:20:22.106 "trsvcid": "55400", 00:20:22.106 "trtype": "TCP" 00:20:22.106 }, 00:20:22.106 "qid": 0, 00:20:22.106 "state": "enabled", 00:20:22.106 "thread": "nvmf_tgt_poll_group_000" 00:20:22.106 } 00:20:22.106 ]' 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.106 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.379 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:22.379 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.945 
17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.945 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.203 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.204 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.204 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.204 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.770 00:20:23.770 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.770 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.770 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.029 { 00:20:24.029 "auth": { 00:20:24.029 "dhgroup": "null", 00:20:24.029 "digest": "sha512", 00:20:24.029 "state": "completed" 00:20:24.029 }, 00:20:24.029 "cntlid": 97, 00:20:24.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:24.029 "listen_address": { 00:20:24.029 "adrfam": "IPv4", 00:20:24.029 "traddr": "10.0.0.3", 00:20:24.029 "trsvcid": "4420", 00:20:24.029 "trtype": "TCP" 00:20:24.029 }, 00:20:24.029 "peer_address": { 00:20:24.029 "adrfam": "IPv4", 00:20:24.029 "traddr": "10.0.0.1", 00:20:24.029 "trsvcid": "55444", 00:20:24.029 "trtype": "TCP" 00:20:24.029 }, 00:20:24.029 "qid": 0, 00:20:24.029 "state": "enabled", 00:20:24.029 "thread": "nvmf_tgt_poll_group_000" 00:20:24.029 } 00:20:24.029 ]' 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.029 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.287 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.287 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.287 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.287 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.287 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.546 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:24.546 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.113 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.681 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.939 00:20:25.939 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.939 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.939 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.198 17:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.198 { 00:20:26.198 "auth": { 00:20:26.198 "dhgroup": "null", 00:20:26.198 "digest": "sha512", 00:20:26.198 "state": "completed" 00:20:26.198 }, 00:20:26.198 "cntlid": 99, 00:20:26.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:26.198 "listen_address": { 00:20:26.198 "adrfam": "IPv4", 00:20:26.198 "traddr": "10.0.0.3", 00:20:26.198 "trsvcid": "4420", 00:20:26.198 "trtype": "TCP" 00:20:26.198 }, 00:20:26.198 "peer_address": { 00:20:26.198 "adrfam": "IPv4", 00:20:26.198 "traddr": "10.0.0.1", 00:20:26.198 "trsvcid": "55464", 00:20:26.198 "trtype": "TCP" 00:20:26.198 }, 00:20:26.198 "qid": 0, 00:20:26.198 "state": "enabled", 00:20:26.198 "thread": "nvmf_tgt_poll_group_000" 00:20:26.198 } 00:20:26.198 ]' 00:20:26.198 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.199 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.199 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.199 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.199 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.457 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.457 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.457 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.716 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:26.716 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.282 17:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:27.282 17:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.541 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.111 00:20:28.111 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.111 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.111 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.370 { 00:20:28.370 "auth": { 00:20:28.370 "dhgroup": "null", 00:20:28.370 "digest": "sha512", 00:20:28.370 "state": "completed" 00:20:28.370 }, 00:20:28.370 "cntlid": 101, 00:20:28.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:28.370 "listen_address": { 00:20:28.370 "adrfam": "IPv4", 00:20:28.370 "traddr": "10.0.0.3", 00:20:28.370 "trsvcid": "4420", 00:20:28.370 "trtype": "TCP" 00:20:28.370 }, 00:20:28.370 "peer_address": { 00:20:28.370 "adrfam": "IPv4", 00:20:28.370 "traddr": "10.0.0.1", 00:20:28.370 "trsvcid": "55496", 00:20:28.370 "trtype": "TCP" 00:20:28.370 }, 00:20:28.370 "qid": 0, 00:20:28.370 "state": "enabled", 00:20:28.370 "thread": "nvmf_tgt_poll_group_000" 00:20:28.370 } 00:20:28.370 ]' 00:20:28.370 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.370 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.630 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:28.630 17:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.566 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.567 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.567 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.567 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:29.567 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.567 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.825 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.825 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.826 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.826 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.085 00:20:30.085 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.085 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.085 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.344 { 00:20:30.344 "auth": { 00:20:30.344 "dhgroup": "null", 00:20:30.344 "digest": "sha512", 00:20:30.344 "state": "completed" 00:20:30.344 }, 00:20:30.344 "cntlid": 103, 00:20:30.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:30.344 "listen_address": { 00:20:30.344 "adrfam": "IPv4", 00:20:30.344 "traddr": "10.0.0.3", 00:20:30.344 "trsvcid": "4420", 00:20:30.344 "trtype": "TCP" 00:20:30.344 }, 00:20:30.344 "peer_address": { 00:20:30.344 "adrfam": "IPv4", 00:20:30.344 "traddr": "10.0.0.1", 00:20:30.344 "trsvcid": "44938", 00:20:30.344 "trtype": "TCP" 00:20:30.344 }, 00:20:30.344 "qid": 0, 00:20:30.344 "state": "enabled", 00:20:30.344 "thread": "nvmf_tgt_poll_group_000" 00:20:30.344 } 00:20:30.344 ]' 00:20:30.344 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.603 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.862 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:30.862 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:31.836 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.095 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.354 00:20:32.354 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.354 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.354 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.613 
17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.613 { 00:20:32.613 "auth": { 00:20:32.613 "dhgroup": "ffdhe2048", 00:20:32.613 "digest": "sha512", 00:20:32.613 "state": "completed" 00:20:32.613 }, 00:20:32.613 "cntlid": 105, 00:20:32.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:32.613 "listen_address": { 00:20:32.613 "adrfam": "IPv4", 00:20:32.613 "traddr": "10.0.0.3", 00:20:32.613 "trsvcid": "4420", 00:20:32.613 "trtype": "TCP" 00:20:32.613 }, 00:20:32.613 "peer_address": { 00:20:32.613 "adrfam": "IPv4", 00:20:32.613 "traddr": "10.0.0.1", 00:20:32.613 "trsvcid": "44956", 00:20:32.613 "trtype": "TCP" 00:20:32.613 }, 00:20:32.613 "qid": 0, 00:20:32.613 "state": "enabled", 00:20:32.613 "thread": "nvmf_tgt_poll_group_000" 00:20:32.613 } 00:20:32.613 ]' 00:20:32.613 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.871 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.130 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:33.130 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:34.068 17:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.068 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.326 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.327 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.585 00:20:34.844 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.844 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.844 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.102 { 00:20:35.102 "auth": { 00:20:35.102 "dhgroup": "ffdhe2048", 00:20:35.102 "digest": "sha512", 00:20:35.102 "state": "completed" 00:20:35.102 }, 00:20:35.102 "cntlid": 107, 00:20:35.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:35.102 "listen_address": { 00:20:35.102 "adrfam": "IPv4", 00:20:35.102 "traddr": "10.0.0.3", 00:20:35.102 "trsvcid": "4420", 00:20:35.102 "trtype": "TCP" 00:20:35.102 }, 00:20:35.102 "peer_address": { 00:20:35.102 "adrfam": "IPv4", 00:20:35.102 "traddr": "10.0.0.1", 00:20:35.102 "trsvcid": "44984", 00:20:35.102 "trtype": "TCP" 00:20:35.102 }, 00:20:35.102 "qid": 0, 00:20:35.102 "state": "enabled", 00:20:35.102 "thread": "nvmf_tgt_poll_group_000" 00:20:35.102 } 00:20:35.102 ]' 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.102 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.670 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:35.670 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.237 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.497 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.080 00:20:37.080 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.080 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.080 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:37.339 17:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.339 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.339 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.339 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.339 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.339 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.339 { 00:20:37.339 "auth": { 00:20:37.339 "dhgroup": "ffdhe2048", 00:20:37.339 "digest": "sha512", 00:20:37.339 "state": "completed" 00:20:37.339 }, 00:20:37.339 "cntlid": 109, 00:20:37.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:37.339 "listen_address": { 00:20:37.339 "adrfam": "IPv4", 00:20:37.339 "traddr": "10.0.0.3", 00:20:37.339 "trsvcid": "4420", 00:20:37.339 "trtype": "TCP" 00:20:37.339 }, 00:20:37.339 "peer_address": { 00:20:37.339 "adrfam": "IPv4", 00:20:37.339 "traddr": "10.0.0.1", 00:20:37.339 "trsvcid": "45016", 00:20:37.339 "trtype": "TCP" 00:20:37.339 }, 00:20:37.339 "qid": 0, 00:20:37.339 "state": "enabled", 00:20:37.339 "thread": "nvmf_tgt_poll_group_000" 00:20:37.340 } 00:20:37.340 ]' 00:20:37.340 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.340 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.340 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.340 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.340 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.608 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.608 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.608 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.866 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:37.866 17:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.435 17:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.435 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.002 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.262 00:20:39.262 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.262 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.262 17:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.521 { 00:20:39.521 "auth": { 00:20:39.521 "dhgroup": "ffdhe2048", 00:20:39.521 "digest": "sha512", 00:20:39.521 "state": "completed" 00:20:39.521 }, 00:20:39.521 "cntlid": 111, 00:20:39.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:39.521 "listen_address": { 00:20:39.521 "adrfam": "IPv4", 00:20:39.521 "traddr": "10.0.0.3", 00:20:39.521 "trsvcid": "4420", 00:20:39.521 "trtype": "TCP" 00:20:39.521 }, 00:20:39.521 "peer_address": { 00:20:39.521 "adrfam": "IPv4", 00:20:39.521 "traddr": "10.0.0.1", 00:20:39.521 "trsvcid": "52896", 00:20:39.521 "trtype": "TCP" 00:20:39.521 }, 00:20:39.521 "qid": 0, 00:20:39.521 "state": "enabled", 00:20:39.521 "thread": "nvmf_tgt_poll_group_000" 00:20:39.521 } 00:20:39.521 ]' 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.521 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.780 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.780 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.780 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.780 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.780 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.040 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:40.040 17:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.608 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.175 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.434 00:20:41.434 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.434 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
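The trace above repeats one fixed cycle per digest/DH-group/key combination. A minimal sketch of that cycle follows, assuming the target from this run (10.0.0.3:4420, host RPC socket /var/tmp/host.sock) and that keys key0..key3 and ckey0..ckey3 were already loaded earlier in the test; every command is taken from the surrounding trace, using the sha512/ffdhe3072/key0 iteration as the example.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e

# 1. Restrict the host to one digest/DH-group pair for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Register the host on the target with the keys under test
#    (the ctrlr key is omitted for key3, which has no ckey in this run).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side, authenticating via DH-HMAC-CHAP.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify the controller came up and the qpair negotiated what we asked for.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# 5. Tear down before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

On a successful handshake the qpair's auth block reports the negotiated digest and dhgroup with state "completed", which is exactly what the [[ sha512 == ... ]], [[ ffdhe3072 == ... ]] and [[ completed == ... ]] assertions in the trace check.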
00:20:41.434 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.694 { 00:20:41.694 "auth": { 00:20:41.694 "dhgroup": "ffdhe3072", 00:20:41.694 "digest": "sha512", 00:20:41.694 "state": "completed" 00:20:41.694 }, 00:20:41.694 "cntlid": 113, 00:20:41.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:41.694 "listen_address": { 00:20:41.694 "adrfam": "IPv4", 00:20:41.694 "traddr": "10.0.0.3", 00:20:41.694 "trsvcid": "4420", 00:20:41.694 "trtype": "TCP" 00:20:41.694 }, 00:20:41.694 "peer_address": { 00:20:41.694 "adrfam": "IPv4", 00:20:41.694 "traddr": "10.0.0.1", 00:20:41.694 "trsvcid": "52912", 00:20:41.694 "trtype": "TCP" 00:20:41.694 }, 00:20:41.694 "qid": 0, 00:20:41.694 "state": "enabled", 00:20:41.694 "thread": "nvmf_tgt_poll_group_000" 00:20:41.694 } 00:20:41.694 ]' 00:20:41.694 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.953 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.211 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:42.211 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret 
DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:43.227 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.227 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.485 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.485 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.485 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.485 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.743 00:20:43.743 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.743 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.743 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.001 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.001 { 00:20:44.001 "auth": { 00:20:44.001 "dhgroup": "ffdhe3072", 00:20:44.001 "digest": "sha512", 00:20:44.001 "state": "completed" 00:20:44.001 }, 00:20:44.001 "cntlid": 115, 00:20:44.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:44.001 "listen_address": { 00:20:44.001 "adrfam": "IPv4", 00:20:44.001 "traddr": "10.0.0.3", 00:20:44.001 "trsvcid": "4420", 00:20:44.001 "trtype": "TCP" 00:20:44.001 }, 00:20:44.001 "peer_address": { 00:20:44.001 "adrfam": "IPv4", 00:20:44.001 "traddr": "10.0.0.1", 00:20:44.001 "trsvcid": "52952", 00:20:44.001 "trtype": "TCP" 00:20:44.001 }, 00:20:44.002 "qid": 0, 00:20:44.002 "state": "enabled", 00:20:44.002 "thread": "nvmf_tgt_poll_group_000" 00:20:44.002 } 00:20:44.002 ]' 00:20:44.002 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.002 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.002 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.260 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.260 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.260 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.260 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.260 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.518 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:44.518 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 
331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.452 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.711 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.969 00:20:45.969 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.969 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.969 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.536 { 00:20:46.536 "auth": { 00:20:46.536 "dhgroup": "ffdhe3072", 00:20:46.536 "digest": "sha512", 00:20:46.536 "state": "completed" 00:20:46.536 }, 00:20:46.536 "cntlid": 117, 00:20:46.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:46.536 "listen_address": { 00:20:46.536 "adrfam": "IPv4", 00:20:46.536 "traddr": "10.0.0.3", 00:20:46.536 "trsvcid": "4420", 00:20:46.536 "trtype": "TCP" 00:20:46.536 }, 00:20:46.536 "peer_address": { 00:20:46.536 "adrfam": "IPv4", 00:20:46.536 "traddr": "10.0.0.1", 00:20:46.536 "trsvcid": "52966", 00:20:46.536 "trtype": "TCP" 00:20:46.536 }, 00:20:46.536 "qid": 0, 00:20:46.536 "state": "enabled", 00:20:46.536 "thread": "nvmf_tgt_poll_group_000" 00:20:46.536 } 00:20:46.536 ]' 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.536 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.794 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:46.794 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.730 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.988 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.247 00:20:48.247 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.247 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.247 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.814 { 00:20:48.814 "auth": { 00:20:48.814 "dhgroup": "ffdhe3072", 00:20:48.814 "digest": "sha512", 00:20:48.814 "state": "completed" 00:20:48.814 }, 00:20:48.814 "cntlid": 119, 00:20:48.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:48.814 "listen_address": { 00:20:48.814 "adrfam": "IPv4", 00:20:48.814 "traddr": "10.0.0.3", 00:20:48.814 "trsvcid": "4420", 00:20:48.814 "trtype": "TCP" 00:20:48.814 }, 00:20:48.814 "peer_address": { 00:20:48.814 "adrfam": "IPv4", 00:20:48.814 "traddr": "10.0.0.1", 00:20:48.814 "trsvcid": "48392", 00:20:48.814 "trtype": "TCP" 00:20:48.814 }, 00:20:48.814 "qid": 0, 00:20:48.814 "state": "enabled", 00:20:48.814 "thread": "nvmf_tgt_poll_group_000" 00:20:48.814 } 00:20:48.814 ]' 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.814 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.073 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:49.073 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:49.681 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.940 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.508 00:20:50.508 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.508 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.508 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.767 { 00:20:50.767 "auth": { 00:20:50.767 "dhgroup": "ffdhe4096", 00:20:50.767 "digest": "sha512", 00:20:50.767 "state": "completed" 00:20:50.767 }, 00:20:50.767 "cntlid": 121, 00:20:50.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:50.767 "listen_address": { 00:20:50.767 "adrfam": "IPv4", 00:20:50.767 "traddr": "10.0.0.3", 00:20:50.767 "trsvcid": "4420", 00:20:50.767 "trtype": "TCP" 00:20:50.767 }, 00:20:50.767 "peer_address": { 00:20:50.767 "adrfam": "IPv4", 00:20:50.767 "traddr": "10.0.0.1", 00:20:50.767 "trsvcid": "48430", 00:20:50.767 "trtype": "TCP" 00:20:50.767 }, 00:20:50.767 "qid": 0, 00:20:50.767 "state": "enabled", 00:20:50.767 "thread": "nvmf_tgt_poll_group_000" 00:20:50.767 } 00:20:50.767 ]' 00:20:50.767 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.027 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.285 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret 
DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:51.285 17:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:20:51.855 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.113 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.114 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.373 17:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.941 00:20:52.941 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.941 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.941 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.199 { 00:20:53.199 "auth": { 00:20:53.199 "dhgroup": "ffdhe4096", 00:20:53.199 "digest": "sha512", 00:20:53.199 "state": "completed" 00:20:53.199 }, 00:20:53.199 "cntlid": 123, 00:20:53.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:53.199 "listen_address": { 00:20:53.199 "adrfam": "IPv4", 00:20:53.199 "traddr": "10.0.0.3", 00:20:53.199 "trsvcid": "4420", 00:20:53.199 "trtype": "TCP" 00:20:53.199 }, 00:20:53.199 "peer_address": { 00:20:53.199 "adrfam": "IPv4", 00:20:53.199 "traddr": "10.0.0.1", 00:20:53.199 "trsvcid": "48464", 00:20:53.199 "trtype": "TCP" 00:20:53.199 }, 00:20:53.199 "qid": 0, 00:20:53.199 "state": "enabled", 00:20:53.199 "thread": "nvmf_tgt_poll_group_000" 00:20:53.199 } 00:20:53.199 ]' 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.199 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.200 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.200 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.200 17:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.458 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.458 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.458 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.716 17:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:53.716 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.280 17:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.537 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.793 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.793 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.793 17:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.793 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.112 00:20:55.112 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.112 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.112 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.368 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.368 { 00:20:55.368 "auth": { 00:20:55.368 "dhgroup": "ffdhe4096", 00:20:55.368 "digest": "sha512", 00:20:55.368 "state": "completed" 00:20:55.369 }, 00:20:55.369 "cntlid": 125, 00:20:55.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:55.369 "listen_address": { 00:20:55.369 "adrfam": "IPv4", 00:20:55.369 "traddr": "10.0.0.3", 00:20:55.369 "trsvcid": "4420", 00:20:55.369 "trtype": "TCP" 00:20:55.369 }, 00:20:55.369 "peer_address": { 00:20:55.369 "adrfam": "IPv4", 00:20:55.369 "traddr": "10.0.0.1", 00:20:55.369 "trsvcid": "48498", 00:20:55.369 "trtype": "TCP" 00:20:55.369 }, 00:20:55.369 "qid": 0, 00:20:55.369 "state": "enabled", 00:20:55.369 "thread": "nvmf_tgt_poll_group_000" 00:20:55.369 } 00:20:55.369 ]' 00:20:55.369 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.369 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.369 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.626 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.626 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.626 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.626 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.626 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.884 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:55.884 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.817 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.818 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.076 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.334 00:20:57.334 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.334 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.334 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.907 { 00:20:57.907 "auth": { 00:20:57.907 "dhgroup": "ffdhe4096", 00:20:57.907 "digest": "sha512", 00:20:57.907 "state": "completed" 00:20:57.907 }, 00:20:57.907 "cntlid": 127, 00:20:57.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:20:57.907 "listen_address": { 00:20:57.907 "adrfam": "IPv4", 00:20:57.907 "traddr": "10.0.0.3", 00:20:57.907 "trsvcid": "4420", 00:20:57.907 "trtype": "TCP" 00:20:57.907 }, 00:20:57.907 "peer_address": { 00:20:57.907 "adrfam": "IPv4", 00:20:57.907 "traddr": "10.0.0.1", 00:20:57.907 "trsvcid": "48520", 00:20:57.907 "trtype": "TCP" 00:20:57.907 }, 00:20:57.907 "qid": 0, 00:20:57.907 "state": "enabled", 00:20:57.907 "thread": "nvmf_tgt_poll_group_000" 00:20:57.907 } 00:20:57.907 ]' 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.907 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.908 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.908 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.908 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.166 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:58.166 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.103 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.361 17:41:40 
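[annotation] Here the trace moves from ffdhe4096 to ffdhe6144; the auth.sh@119/@120 frames show the driver is just two nested loops over DH groups and key IDs. A rough reconstruction of that shape (implied by the trace, not the verbatim script; the keys/ckeys arrays and the hostrpc and connect_authenticate helpers come from earlier in the run):

    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
    # Inside connect_authenticate (auth.sh@68), $3 is the key ID; the
    # expansion emits --dhchap-ctrlr-key only when ckeys[$3] is set, so
    # rounds without a controller key (key3 above) authenticate one-way.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
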
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.361 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.362 17:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.928 00:20:59.928 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.928 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.928 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.186 { 00:21:00.186 "auth": { 00:21:00.186 "dhgroup": "ffdhe6144", 00:21:00.186 "digest": "sha512", 00:21:00.186 "state": "completed" 00:21:00.186 }, 00:21:00.186 "cntlid": 129, 00:21:00.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:00.186 "listen_address": { 00:21:00.186 "adrfam": "IPv4", 00:21:00.186 "traddr": "10.0.0.3", 00:21:00.186 "trsvcid": "4420", 00:21:00.186 "trtype": "TCP" 00:21:00.186 }, 00:21:00.186 "peer_address": { 00:21:00.186 "adrfam": "IPv4", 00:21:00.186 "traddr": "10.0.0.1", 00:21:00.186 "trsvcid": "43866", 00:21:00.186 "trtype": "TCP" 00:21:00.186 }, 00:21:00.186 "qid": 0, 00:21:00.186 "state": "enabled", 00:21:00.186 "thread": "nvmf_tgt_poll_group_000" 00:21:00.186 } 00:21:00.186 ]' 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.186 17:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.807 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:00.807 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.374 17:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.632 17:41:43 
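[annotation] Besides the SPDK-host bdev path, every round is also exercised from the kernel initiator (auth.sh@36 above): nvme-cli takes the secrets in-line rather than by keyring name. A sketch with the flags as traced; $key and $ctrl_key stand for the DHHC-1:...: strings shown in the log (throwaway test keys in this run; real deployments would generate them, e.g. with nvme gen-dhchap-key):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
    # --dhchap-ctrl-secret makes the exchange bidirectional: the host
    # also verifies the controller's response.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
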
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.632 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.633 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.198 00:21:02.198 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.198 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.198 17:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.455 { 00:21:02.455 "auth": { 00:21:02.455 "dhgroup": "ffdhe6144", 00:21:02.455 "digest": "sha512", 00:21:02.455 "state": "completed" 00:21:02.455 }, 00:21:02.455 "cntlid": 131, 00:21:02.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:02.455 "listen_address": { 00:21:02.455 "adrfam": "IPv4", 00:21:02.455 "traddr": "10.0.0.3", 00:21:02.455 "trsvcid": "4420", 00:21:02.455 "trtype": "TCP" 00:21:02.455 }, 00:21:02.455 "peer_address": { 00:21:02.455 "adrfam": "IPv4", 00:21:02.455 "traddr": "10.0.0.1", 00:21:02.455 "trsvcid": "43892", 00:21:02.455 "trtype": "TCP" 00:21:02.455 }, 00:21:02.455 "qid": 0, 00:21:02.455 "state": "enabled", 00:21:02.455 "thread": "nvmf_tgt_poll_group_000" 00:21:02.455 } 00:21:02.455 ]' 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.455 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.456 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.456 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.456 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:21:02.713 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.713 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.713 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.972 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:21:02.972 17:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.904 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.161 17:41:45 
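[annotation] A note on the secret strings themselves, stated from the NVMe-oF in-band authentication format rather than this log: they follow the interchange form DHHC-1:<tt>:<base64>:, where <tt> selects the optional hash transform applied to the secret (00 none, 01/02/03 for SHA-256/384/512; the host keys key0 through key3 in this run happen to be tagged 00 through 03) and the base64 payload carries the secret with a 4-byte CRC appended. A quick length check on the key1 string traced just above:

    # 48 base64 chars decode to 36 bytes: a 32-byte secret plus CRC-32.
    key='DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # -> 36
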
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.161 17:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.728 00:21:04.728 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.728 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.728 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.986 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.987 { 00:21:04.987 "auth": { 00:21:04.987 "dhgroup": "ffdhe6144", 00:21:04.987 "digest": "sha512", 00:21:04.987 "state": "completed" 00:21:04.987 }, 00:21:04.987 "cntlid": 133, 00:21:04.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:04.987 "listen_address": { 00:21:04.987 "adrfam": "IPv4", 00:21:04.987 "traddr": "10.0.0.3", 00:21:04.987 "trsvcid": "4420", 00:21:04.987 "trtype": "TCP" 00:21:04.987 }, 00:21:04.987 "peer_address": { 00:21:04.987 "adrfam": "IPv4", 00:21:04.987 "traddr": "10.0.0.1", 00:21:04.987 "trsvcid": "43920", 00:21:04.987 "trtype": "TCP" 00:21:04.987 }, 00:21:04.987 "qid": 0, 00:21:04.987 "state": "enabled", 00:21:04.987 "thread": "nvmf_tgt_poll_group_000" 00:21:04.987 } 00:21:04.987 ]' 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.987 17:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.552 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:21:05.553 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.118 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.376 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.941 00:21:06.941 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.941 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.941 17:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.506 { 00:21:07.506 "auth": { 00:21:07.506 "dhgroup": "ffdhe6144", 00:21:07.506 "digest": "sha512", 00:21:07.506 "state": "completed" 00:21:07.506 }, 00:21:07.506 "cntlid": 135, 00:21:07.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:07.506 "listen_address": { 00:21:07.506 "adrfam": "IPv4", 00:21:07.506 "traddr": "10.0.0.3", 00:21:07.506 "trsvcid": "4420", 00:21:07.506 "trtype": "TCP" 00:21:07.506 }, 00:21:07.506 "peer_address": { 00:21:07.506 "adrfam": "IPv4", 00:21:07.506 "traddr": "10.0.0.1", 00:21:07.506 "trsvcid": "43940", 00:21:07.506 "trtype": "TCP" 00:21:07.506 }, 00:21:07.506 "qid": 0, 00:21:07.506 "state": "enabled", 00:21:07.506 "thread": "nvmf_tgt_poll_group_000" 00:21:07.506 } 00:21:07.506 ]' 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.506 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.764 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:07.764 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.698 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.956 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.522 00:21:09.781 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.781 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.781 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.039 { 00:21:10.039 "auth": { 00:21:10.039 "dhgroup": "ffdhe8192", 00:21:10.039 "digest": "sha512", 00:21:10.039 "state": "completed" 00:21:10.039 }, 00:21:10.039 "cntlid": 137, 00:21:10.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:10.039 "listen_address": { 00:21:10.039 "adrfam": "IPv4", 00:21:10.039 "traddr": "10.0.0.3", 00:21:10.039 "trsvcid": "4420", 00:21:10.039 "trtype": "TCP" 00:21:10.039 }, 00:21:10.039 "peer_address": { 00:21:10.039 "adrfam": "IPv4", 00:21:10.039 "traddr": "10.0.0.1", 00:21:10.039 "trsvcid": "42678", 00:21:10.039 "trtype": "TCP" 00:21:10.039 }, 00:21:10.039 "qid": 0, 00:21:10.039 "state": "enabled", 00:21:10.039 "thread": "nvmf_tgt_poll_group_000" 00:21:10.039 } 00:21:10.039 ]' 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.039 17:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.039 17:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.605 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:10.606 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.172 17:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.431 17:41:53 
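[annotation] Each round unwinds in the fixed order just traced before the next key is tried: the bdev controller is detached once the jq checks pass (auth.sh@78), then after the kernel-path connect has been exercised the host is disconnected and deauthorized (@82, @83). Sketched with the script's own wrappers (hostrpc drives rpc.py against /var/tmp/host.sock, rpc_cmd the target app; $hostnqn as above):

    hostrpc bdev_nvme_detach_controller nvme0      # drop the SPDK-host path (@78)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0  # drop the kernel path (@82)
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"  # @83
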
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.431 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.997 00:21:12.255 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.255 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.255 17:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.514 { 00:21:12.514 "auth": { 00:21:12.514 "dhgroup": "ffdhe8192", 00:21:12.514 "digest": "sha512", 00:21:12.514 "state": "completed" 00:21:12.514 }, 00:21:12.514 "cntlid": 139, 00:21:12.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:12.514 "listen_address": { 00:21:12.514 "adrfam": "IPv4", 00:21:12.514 "traddr": "10.0.0.3", 00:21:12.514 "trsvcid": "4420", 00:21:12.514 "trtype": "TCP" 00:21:12.514 }, 00:21:12.514 "peer_address": { 00:21:12.514 "adrfam": "IPv4", 00:21:12.514 "traddr": "10.0.0.1", 00:21:12.514 "trsvcid": "42692", 00:21:12.514 "trtype": "TCP" 00:21:12.514 }, 00:21:12.514 "qid": 0, 00:21:12.514 "state": "enabled", 00:21:12.514 "thread": "nvmf_tgt_poll_group_000" 00:21:12.514 } 00:21:12.514 ]' 00:21:12.514 17:41:54 
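[annotation] What follows next (auth.sh@75 through @77) is the same trio of jq probes run after every attach in this section. Folded into a helper they amount to the sketch below; rpc_cmd is the script's wrapper for the target app, and the subsystem NQN is fixed for this run:

    check_auth() {    # usage: check_auth <digest> <dhgroup>, e.g. check_auth sha512 ffdhe8192
        local qpairs
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$1" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$2" ]] &&
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    }
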
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.514 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.082 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:21:13.083 17:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: --dhchap-ctrl-secret DHHC-1:02:MjdmZWM0NTBjNWFmZDI3MTdlODY1MWVkNmFiYzA5NDMxYzNmNzc2MTY1MWEwMDVh2xo8Iw==: 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.649 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.906 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.907 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.842 00:21:14.842 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.842 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.842 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.101 { 00:21:15.101 "auth": { 00:21:15.101 "dhgroup": "ffdhe8192", 00:21:15.101 "digest": "sha512", 00:21:15.101 "state": "completed" 00:21:15.101 }, 00:21:15.101 "cntlid": 141, 00:21:15.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:15.101 "listen_address": { 00:21:15.101 "adrfam": "IPv4", 00:21:15.101 "traddr": "10.0.0.3", 00:21:15.101 "trsvcid": "4420", 00:21:15.101 "trtype": "TCP" 00:21:15.101 }, 00:21:15.101 "peer_address": { 00:21:15.101 "adrfam": "IPv4", 00:21:15.101 "traddr": "10.0.0.1", 00:21:15.101 "trsvcid": "42718", 00:21:15.101 "trtype": "TCP" 00:21:15.101 }, 00:21:15.101 "qid": 0, 00:21:15.101 "state": 
"enabled", 00:21:15.101 "thread": "nvmf_tgt_poll_group_000" 00:21:15.101 } 00:21:15.101 ]' 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.101 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.359 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:21:15.359 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:01:YTA0MGRmYzQ1YjFjN2EzNjkwYTRhNjZiMjNlNWU5MjItuK8Z: 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.294 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.554 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.120 00:21:17.120 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.120 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.120 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.379 { 00:21:17.379 "auth": { 00:21:17.379 "dhgroup": "ffdhe8192", 00:21:17.379 "digest": "sha512", 00:21:17.379 "state": "completed" 00:21:17.379 }, 00:21:17.379 "cntlid": 143, 00:21:17.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:17.379 "listen_address": { 00:21:17.379 "adrfam": "IPv4", 00:21:17.379 "traddr": "10.0.0.3", 00:21:17.379 "trsvcid": "4420", 00:21:17.379 "trtype": "TCP" 00:21:17.379 }, 00:21:17.379 "peer_address": { 00:21:17.379 "adrfam": "IPv4", 00:21:17.379 "traddr": "10.0.0.1", 00:21:17.379 "trsvcid": "42744", 00:21:17.379 "trtype": "TCP" 00:21:17.379 }, 00:21:17.379 "qid": 0, 00:21:17.379 
"state": "enabled", 00:21:17.379 "thread": "nvmf_tgt_poll_group_000" 00:21:17.379 } 00:21:17.379 ]' 00:21:17.379 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.637 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.894 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:17.894 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:18.460 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.460 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:18.460 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.460 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.718 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.978 17:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.544 00:21:19.544 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.544 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.544 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.111 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.111 { 00:21:20.111 "auth": { 00:21:20.111 "dhgroup": "ffdhe8192", 00:21:20.111 "digest": "sha512", 00:21:20.111 "state": "completed" 00:21:20.111 }, 00:21:20.111 
"cntlid": 145, 00:21:20.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:20.111 "listen_address": { 00:21:20.111 "adrfam": "IPv4", 00:21:20.111 "traddr": "10.0.0.3", 00:21:20.111 "trsvcid": "4420", 00:21:20.111 "trtype": "TCP" 00:21:20.111 }, 00:21:20.111 "peer_address": { 00:21:20.111 "adrfam": "IPv4", 00:21:20.111 "traddr": "10.0.0.1", 00:21:20.111 "trsvcid": "43870", 00:21:20.111 "trtype": "TCP" 00:21:20.111 }, 00:21:20.111 "qid": 0, 00:21:20.112 "state": "enabled", 00:21:20.112 "thread": "nvmf_tgt_poll_group_000" 00:21:20.112 } 00:21:20.112 ]' 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.112 17:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.678 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:20.678 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:00:NTVkNGI3NzViYmJlNTk4NDc1NDkxYzViMjM1ZDZkODg2ZGU4YjJlOTIwNDIwMTJhkeI05w==: --dhchap-ctrl-secret DHHC-1:03:N2MxNGM0YjNkNWI2YmVmZmY5YTlhMDE4OGMwMTg1NGQ1MTIxNGRkZjMyN2Q2NzA4ODE0OWQ1NGQwYjU1YjQ4YdiXGzA=: 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 00:21:21.244 17:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.244 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:21.245 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:21.811 2024/11/20 17:42:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:21.811 request: 00:21:21.811 { 00:21:21.811 "method": "bdev_nvme_attach_controller", 00:21:21.811 "params": { 00:21:21.811 "name": "nvme0", 00:21:21.811 "trtype": "tcp", 00:21:21.811 "traddr": "10.0.0.3", 00:21:21.811 "adrfam": "ipv4", 00:21:21.811 "trsvcid": "4420", 00:21:21.811 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:21.811 "prchk_reftag": false, 00:21:21.811 "prchk_guard": false, 00:21:21.811 "hdgst": false, 00:21:21.811 "ddgst": false, 00:21:21.811 "dhchap_key": "key2", 00:21:21.811 "allow_unrecognized_csi": false 00:21:21.811 } 00:21:21.811 } 00:21:21.811 Got JSON-RPC error response 00:21:21.811 GoRPCClient: error on JSON-RPC call 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
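The Input/output error above is the expected outcome, not a failure: the host was registered with key1 only (target/auth.sh@144), so an attach presenting key2 must be rejected by DH-HMAC-CHAP, and the NOT wrapper from autotest_common.sh inverts the exit status so the rejected attach counts as a pass. A condensed sketch of that inversion, assuming the traced es bookkeeping reduces to a plain status flip (the real helper also validates its argument via valid_exec_arg, as the trace shows):

NOT() {
    # run the command; treat failure (e.g. an auth I/O error) as success
    if "$@"; then
        return 1    # unexpected success fails the negative test
    fi
    return 0
}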
00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.811 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.746 2024/11/20 17:42:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:22.746 request: 00:21:22.746 { 00:21:22.746 "method": "bdev_nvme_attach_controller", 00:21:22.746 "params": { 00:21:22.746 "name": "nvme0", 00:21:22.746 "trtype": "tcp", 00:21:22.746 "traddr": "10.0.0.3", 00:21:22.746 "adrfam": "ipv4", 00:21:22.746 "trsvcid": "4420", 00:21:22.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:22.746 "prchk_reftag": false, 00:21:22.746 "prchk_guard": false, 00:21:22.746 "hdgst": false, 00:21:22.746 "ddgst": false, 00:21:22.746 "dhchap_key": "key1", 00:21:22.746 "dhchap_ctrlr_key": "ckey2", 00:21:22.746 "allow_unrecognized_csi": false 00:21:22.746 } 00:21:22.746 } 00:21:22.746 Got JSON-RPC error response 00:21:22.746 GoRPCClient: error on JSON-RPC call 00:21:22.746 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.747 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.313 2024/11/20 17:42:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:23.313 request: 00:21:23.313 { 00:21:23.313 "method": "bdev_nvme_attach_controller", 00:21:23.313 "params": { 00:21:23.313 "name": "nvme0", 00:21:23.313 "trtype": "tcp", 00:21:23.313 "traddr": "10.0.0.3", 00:21:23.313 "adrfam": "ipv4", 00:21:23.313 "trsvcid": "4420", 00:21:23.313 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:23.313 "prchk_reftag": false, 00:21:23.313 "prchk_guard": false, 00:21:23.313 "hdgst": false, 00:21:23.313 "ddgst": false, 00:21:23.313 "dhchap_key": "key1", 00:21:23.313 "dhchap_ctrlr_key": "ckey1", 00:21:23.313 "allow_unrecognized_csi": false 00:21:23.313 } 00:21:23.313 } 00:21:23.313 Got JSON-RPC error response 00:21:23.313 GoRPCClient: error on JSON-RPC call 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.313 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 94129 00:21:23.314 17:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 94129 ']' 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 94129 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94129 00:21:23.314 killing process with pid 94129 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94129' 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 94129 00:21:23.314 17:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 94129 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=99139 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 99139 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 99139 ']' 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
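Here the harness tears down the first target (pid 94129) and relaunches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, which holds the app idle until an explicit init RPC so the DH-HMAC-CHAP keys can be loaded into the keyring before any listener comes up; -L nvmf_auth enables the auth debug log. A minimal sketch of that restart sequence, assuming the stock framework_start_init RPC is what eventually releases the app (that call is not traced in this excerpt):

# start idle, configure, then let initialization proceed
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init   # assumed follow-up, not shown above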
00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.676 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:23.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 99139 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 99139 ']' 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.937 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.197 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.197 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:24.197 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:24.197 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.197 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 null0 00:21:24.455 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.455 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dEk 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hdf ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdf 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BLt 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.A4z ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A4z 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.456 17:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nDd 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qVj ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qVj 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EYM 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
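Each connect_authenticate round ends the same way: after bdev_nvme_attach_controller succeeds, the test reads the live qpair back from the target and asserts that the negotiated digest, DH group, and final auth state match what was configured. Distilled from the jq filters used throughout this log (rpc_cmd and the subsystem NQN as above):

# read back the qpair and check the negotiated auth parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]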
00:21:24.456 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.391 nvme0n1 00:21:25.391 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.391 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.391 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.961 { 00:21:25.961 "auth": { 00:21:25.961 "dhgroup": "ffdhe8192", 00:21:25.961 "digest": "sha512", 00:21:25.961 "state": "completed" 00:21:25.961 }, 00:21:25.961 "cntlid": 1, 00:21:25.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:25.961 "listen_address": { 00:21:25.961 "adrfam": "IPv4", 00:21:25.961 "traddr": "10.0.0.3", 00:21:25.961 "trsvcid": "4420", 00:21:25.961 "trtype": "TCP" 00:21:25.961 }, 00:21:25.961 "peer_address": { 00:21:25.961 "adrfam": "IPv4", 00:21:25.961 "traddr": "10.0.0.1", 00:21:25.961 "trsvcid": "43944", 00:21:25.961 "trtype": "TCP" 00:21:25.961 }, 00:21:25.961 "qid": 0, 00:21:25.961 "state": "enabled", 00:21:25.961 "thread": "nvmf_tgt_poll_group_000" 00:21:25.961 } 00:21:25.961 ]' 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.961 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.219 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:26.219 17:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key3 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:27.155 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.413 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.694 2024/11/20 17:42:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:27.694 request: 00:21:27.694 { 00:21:27.694 "method": "bdev_nvme_attach_controller", 00:21:27.694 "params": { 00:21:27.694 "name": "nvme0", 00:21:27.694 "trtype": "tcp", 00:21:27.694 "traddr": "10.0.0.3", 00:21:27.694 "adrfam": "ipv4", 00:21:27.694 "trsvcid": "4420", 00:21:27.694 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:27.694 "prchk_reftag": false, 00:21:27.694 "prchk_guard": false, 00:21:27.694 "hdgst": false, 00:21:27.694 "ddgst": false, 00:21:27.694 "dhchap_key": "key3", 00:21:27.694 "allow_unrecognized_csi": false 00:21:27.694 } 00:21:27.694 } 00:21:27.694 Got JSON-RPC error response 00:21:27.694 GoRPCClient: error on JSON-RPC call 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:27.694 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.953 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.212 2024/11/20 17:42:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:28.212 request: 00:21:28.212 { 00:21:28.212 "method": "bdev_nvme_attach_controller", 00:21:28.212 "params": { 00:21:28.212 "name": "nvme0", 00:21:28.212 "trtype": "tcp", 00:21:28.212 "traddr": "10.0.0.3", 00:21:28.212 "adrfam": "ipv4", 00:21:28.212 "trsvcid": "4420", 00:21:28.212 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:28.212 "prchk_reftag": false, 00:21:28.212 "prchk_guard": false, 00:21:28.212 "hdgst": false, 00:21:28.212 "ddgst": false, 00:21:28.213 "dhchap_key": "key3", 00:21:28.213 "allow_unrecognized_csi": false 00:21:28.213 } 00:21:28.213 } 00:21:28.213 Got JSON-RPC error response 00:21:28.213 GoRPCClient: error on JSON-RPC call 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.472 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.732 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:29.299 2024/11/20 17:42:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:29.299 request: 00:21:29.299 { 00:21:29.299 "method": "bdev_nvme_attach_controller", 00:21:29.299 "params": { 00:21:29.299 "name": "nvme0", 00:21:29.299 "trtype": "tcp", 00:21:29.299 "traddr": "10.0.0.3", 00:21:29.299 "adrfam": "ipv4", 00:21:29.299 "trsvcid": "4420", 00:21:29.299 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:29.299 "prchk_reftag": false, 00:21:29.299 "prchk_guard": false, 00:21:29.299 "hdgst": false, 00:21:29.299 "ddgst": false, 00:21:29.299 "dhchap_key": "key0", 00:21:29.299 "dhchap_ctrlr_key": "key1", 00:21:29.299 "allow_unrecognized_csi": false 00:21:29.299 } 00:21:29.299 } 00:21:29.299 Got JSON-RPC error response 00:21:29.299 GoRPCClient: error on JSON-RPC call 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:29.299 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:29.558 nvme0n1 00:21:29.558 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:29.558 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.558 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:30.125 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.125 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.125 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:30.384 17:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:31.335 nvme0n1 00:21:31.335 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:31.335 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:31.335 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.617 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:31.876 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.876 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:31.876 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid 331739d4-7eee-415c-8218-055dbccf3f5e -l 0 --dhchap-secret DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: --dhchap-ctrl-secret DHHC-1:03:NDFjZmRhNzc4MjY0Yjc3ZmMxODRhZDU1MjZhODdkNWZkZDQ0NTA5OWI2YzFkMmY2ZWRjOGViYjZhZWI2NzA2MMNKCXE=: 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
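
The repeated "Code=-5 Msg=Input/output error" request dumps above are the point of this phase rather than a regression: after the target rotates its allowed DH-HMAC-CHAP keys with nvmf_subsystem_set_keys, target/auth.sh asserts that attaching with a now-stale key fails, using the NOT wrapper from autotest_common.sh (the es=1 and "(( es > 128 ))" bookkeeping after each dump is that wrapper classifying the failure). A minimal sketch of the idiom, assuming keyring entries key0..key3 were registered earlier in the run; the real NOT in autotest_common.sh is more elaborate:

  NOT() { "$@" && return 1 || return 0; }   # invert: pass only when the wrapped command fails

  # With the target holding a different key, the host-side attach is rejected during
  # DH-HMAC-CHAP negotiation and surfaces as an I/O error (-5), which NOT turns into a pass.
  # $hostnqn stands for the per-run host NQN seen in the log (placeholder here).
  NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
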
00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.812 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.070 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:33.071 17:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:33.639 2024/11/20 17:42:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:33.639 request: 00:21:33.639 { 00:21:33.639 "method": "bdev_nvme_attach_controller", 00:21:33.639 "params": { 00:21:33.639 "name": "nvme0", 00:21:33.639 "trtype": "tcp", 00:21:33.639 "traddr": "10.0.0.3", 00:21:33.639 "adrfam": "ipv4", 
00:21:33.639 "trsvcid": "4420", 00:21:33.639 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:33.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e", 00:21:33.639 "prchk_reftag": false, 00:21:33.639 "prchk_guard": false, 00:21:33.639 "hdgst": false, 00:21:33.639 "ddgst": false, 00:21:33.639 "dhchap_key": "key1", 00:21:33.639 "allow_unrecognized_csi": false 00:21:33.639 } 00:21:33.639 } 00:21:33.639 Got JSON-RPC error response 00:21:33.639 GoRPCClient: error on JSON-RPC call 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.639 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.575 nvme0n1 00:21:34.575 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:34.575 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.575 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:35.142 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.142 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.142 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:35.401 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:35.685 nvme0n1 00:21:35.685 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:35.685 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.685 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:35.944 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.944 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.944 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: '' 2s 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: ]] 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDU1Y2I0ODQ5NDFlYTcyZTg3ZDU5MjRmNDI3NzBiZDl238ny: 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:36.512 17:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: 2s 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: ]] 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzBjNmUyOGJhYmVkNjFhNzkyZjY3NWQyMDgwNzFlM2UyNmMxMmM5OTNjMGJkODY25YlMqw==: 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:38.414 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:40.315 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.573 17:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:41.509 nvme0n1 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:41.509 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.444 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:42.444 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:42.444 17:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:42.444 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:42.702 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:42.702 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:42.702 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:42.960 17:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:21:43.896 2024/11/20 17:42:25 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:43.896 request: 00:21:43.896 { 00:21:43.896 "method": "bdev_nvme_set_keys", 00:21:43.896 "params": { 00:21:43.896 "name": "nvme0", 00:21:43.896 "dhchap_key": "key1", 00:21:43.896 "dhchap_ctrlr_key": "key3" 00:21:43.896 } 00:21:43.896 } 00:21:43.896 Got JSON-RPC error response 00:21:43.896 GoRPCClient: error on JSON-RPC call 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:43.896 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.154 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:44.154 17:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:45.089 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:45.089 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:45.089 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.347 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:46.284 nvme0n1 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.284 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:46.543 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.543 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:46.543 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:47.153 2024/11/20 17:42:28 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:47.153 request: 00:21:47.153 { 00:21:47.153 "method": "bdev_nvme_set_keys", 00:21:47.153 "params": { 00:21:47.153 "name": "nvme0", 00:21:47.153 "dhchap_key": "key2", 00:21:47.153 "dhchap_ctrlr_key": "key0" 00:21:47.153 } 00:21:47.153 } 00:21:47.153 Got JSON-RPC error response 00:21:47.153 GoRPCClient: error on JSON-RPC call 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.153 17:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:47.410 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:47.410 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:48.344 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:48.345 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.345 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94173 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 94173 ']' 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 94173 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94173 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:48.913 killing process with pid 94173 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94173' 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 94173 00:21:48.913 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 94173 00:21:49.482 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:49.482 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:49.482 17:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:49.482 rmmod nvme_tcp 00:21:49.482 rmmod nvme_fabrics 00:21:49.482 rmmod nvme_keyring 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:49.482 17:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 99139 ']' 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 99139 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 99139 ']' 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 99139 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99139 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.482 killing process with pid 99139 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99139' 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 99139 00:21:49.482 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 99139 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:49.741 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip 
link set nvmf_tgt_br down 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dEk /tmp/spdk.key-sha256.BLt /tmp/spdk.key-sha384.nDd /tmp/spdk.key-sha512.EYM /tmp/spdk.key-sha512.hdf /tmp/spdk.key-sha384.A4z /tmp/spdk.key-sha256.qVj '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:50.000 00:21:50.000 real 3m28.311s 00:21:50.000 user 8m25.976s 00:21:50.000 sys 0m27.615s 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.000 ************************************ 00:21:50.000 END TEST nvmf_auth_target 00:21:50.000 ************************************ 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:50.000 17:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:50.260 ************************************ 00:21:50.260 START TEST nvmf_bdevio_no_huge 00:21:50.260 ************************************ 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:50.260 * Looking for test storage... 
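
The cleanup run above, from killprocess through remove_spdk_ns, is nvmftestfini unwinding the auth test: the host and target SPDK processes are killed, nvme-tcp/nvme-fabrics/nvme-keyring are unloaded (the three rmmod lines), SPDK's iptables rules are stripped, and nvmf_veth_fini dismantles the per-run virtual network before the DH-HMAC-CHAP key files are deleted and the 3m28s timing block closes TEST nvmf_auth_target. Condensed, the network teardown amounts to the following sketch (the real function also walks the *_br2/_if2 pairs shown above, and treating remove_spdk_ns as a plain netns delete is an assumption):

  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge        # bridge joining initiator- and target-side veths
  ip link delete nvmf_init_if               # initiator-side veth
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side veth, from inside the netns
  ip netns delete nvmf_tgt_ns_spdk          # assumption: the effect of remove_spdk_ns here

With that done, the runner starts nvmf_bdevio_no_huge, whose setup (re-sourcing nvmf/common.sh and rebuilding the same topology) continues below.
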
00:21:50.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.260 --rc genhtml_branch_coverage=1 00:21:50.260 --rc genhtml_function_coverage=1 00:21:50.260 --rc genhtml_legend=1 00:21:50.260 --rc geninfo_all_blocks=1 00:21:50.260 --rc geninfo_unexecuted_blocks=1 00:21:50.260 00:21:50.260 ' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.260 --rc genhtml_branch_coverage=1 00:21:50.260 --rc genhtml_function_coverage=1 00:21:50.260 --rc genhtml_legend=1 00:21:50.260 --rc geninfo_all_blocks=1 00:21:50.260 --rc geninfo_unexecuted_blocks=1 00:21:50.260 00:21:50.260 ' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.260 --rc genhtml_branch_coverage=1 00:21:50.260 --rc genhtml_function_coverage=1 00:21:50.260 --rc genhtml_legend=1 00:21:50.260 --rc geninfo_all_blocks=1 00:21:50.260 --rc geninfo_unexecuted_blocks=1 00:21:50.260 00:21:50.260 ' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.260 --rc genhtml_branch_coverage=1 00:21:50.260 --rc genhtml_function_coverage=1 00:21:50.260 --rc genhtml_legend=1 00:21:50.260 --rc geninfo_all_blocks=1 00:21:50.260 --rc geninfo_unexecuted_blocks=1 00:21:50.260 00:21:50.260 ' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.260 
17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.260 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.261 17:42:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.261 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:50.261 
17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:50.261 Cannot find device "nvmf_init_br" 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:50.261 Cannot find device "nvmf_init_br2" 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:50.261 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:50.520 Cannot find device "nvmf_tgt_br" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.520 Cannot find device "nvmf_tgt_br2" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:50.520 Cannot find device "nvmf_init_br" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:50.520 Cannot find device "nvmf_init_br2" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:50.520 Cannot find device "nvmf_tgt_br" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:50.520 Cannot find device "nvmf_tgt_br2" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:50.520 Cannot find device "nvmf_br" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:50.520 Cannot find device "nvmf_init_if" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:50.520 Cannot find device "nvmf_init_if2" 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:21:50.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:50.520 17:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:50.520 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:50.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:50.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:21:50.780 00:21:50.780 --- 10.0.0.3 ping statistics --- 00:21:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.780 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:50.780 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:50.780 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:21:50.780 00:21:50.780 --- 10.0.0.4 ping statistics --- 00:21:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.780 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:50.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:21:50.780 00:21:50.780 --- 10.0.0.1 ping statistics --- 00:21:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.780 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:50.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:50.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:50.780 00:21:50.780 --- 10.0.0.2 ping statistics --- 00:21:50.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.780 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:50.780 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=100015 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 100015 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 100015 ']' 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.781 17:42:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.781 [2024-11-20 17:42:32.469157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
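At this point nvmf_tgt is coming up inside nvmf_tgt_ns_spdk with --no-huge -s 1024, i.e. 1024 MB of ordinary memory instead of hugepages. The virtual network it listens on was assembled a moment earlier; the "Cannot find device" / "Cannot open network namespace" messages in the trace are the expected no-op cleanup of a network that does not exist yet (each failing command is immediately followed by true). Condensed from the commands in the log, the topology is two veth pairs for the initiator side, two for the target side with one end moved into a private namespace, and a bridge tying the peer ends together:

# Condensed from the nvmf_veth_init trace above (names/addresses as logged).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if  up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
# The ipts wrapper tags each rule so teardown can find it later:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings that follow in the trace confirm both directions across the bridge (host to 10.0.0.3/.4, namespace back to 10.0.0.1/.2) before the transport is configured.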
00:21:50.781 [2024-11-20 17:42:32.469291] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:51.055 [2024-11-20 17:42:32.620397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.055 [2024-11-20 17:42:32.750725] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.055 [2024-11-20 17:42:32.750813] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.055 [2024-11-20 17:42:32.750828] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.055 [2024-11-20 17:42:32.750838] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.055 [2024-11-20 17:42:32.750848] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.055 [2024-11-20 17:42:32.751035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:51.055 [2024-11-20 17:42:32.752025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:21:51.055 [2024-11-20 17:42:32.752190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:21:51.055 [2024-11-20 17:42:32.752200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 [2024-11-20 17:42:33.559188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 Malloc0 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.989 [2024-11-20 17:42:33.604756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:51.989 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:51.990 { 00:21:51.990 "params": { 00:21:51.990 "name": "Nvme$subsystem", 00:21:51.990 "trtype": "$TEST_TRANSPORT", 00:21:51.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.990 "adrfam": "ipv4", 00:21:51.990 "trsvcid": "$NVMF_PORT", 00:21:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.990 "hdgst": ${hdgst:-false}, 00:21:51.990 "ddgst": ${ddgst:-false} 00:21:51.990 }, 00:21:51.990 "method": "bdev_nvme_attach_controller" 00:21:51.990 } 00:21:51.990 EOF 00:21:51.990 )") 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
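The rpc_cmd calls above provision the target end to end before bdevio attaches: a TCP transport, a RAM-backed bdev, a subsystem, its namespace, and a listener. Outside the test harness the same sequence is just five rpc.py invocations; the socket path below is the one waitforlisten polled earlier, and the flag values are copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192
"$rpc" -s "$sock" bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
"$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                                    # -a: allow any host; -s: serial number
"$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420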
00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:21:51.990 17:42:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:51.990 "params": { 00:21:51.990 "name": "Nvme1", 00:21:51.990 "trtype": "tcp", 00:21:51.990 "traddr": "10.0.0.3", 00:21:51.990 "adrfam": "ipv4", 00:21:51.990 "trsvcid": "4420", 00:21:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.990 "hdgst": false, 00:21:51.990 "ddgst": false 00:21:51.990 }, 00:21:51.990 "method": "bdev_nvme_attach_controller" 00:21:51.990 }' 00:21:51.990 [2024-11-20 17:42:33.669337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:51.990 [2024-11-20 17:42:33.669428] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid100069 ] 00:21:52.248 [2024-11-20 17:42:33.808791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.248 [2024-11-20 17:42:33.960741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.248 [2024-11-20 17:42:33.960857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.248 [2024-11-20 17:42:33.960867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.507 I/O targets: 00:21:52.507 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:52.507 00:21:52.507 00:21:52.507 CUnit - A unit testing framework for C - Version 2.1-3 00:21:52.507 http://cunit.sourceforge.net/ 00:21:52.507 00:21:52.507 00:21:52.507 Suite: bdevio tests on: Nvme1n1 00:21:52.507 Test: blockdev write read block ...passed 00:21:52.507 Test: blockdev write zeroes read block ...passed 00:21:52.764 Test: blockdev write zeroes read no split ...passed 00:21:52.764 Test: blockdev write zeroes read split ...passed 00:21:52.764 Test: blockdev write zeroes read split partial ...passed 00:21:52.764 Test: blockdev reset ...[2024-11-20 17:42:34.326824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.764 [2024-11-20 17:42:34.326968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168b30 (9): Bad file descriptor 00:21:52.764 [2024-11-20 17:42:34.345343] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
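The pretty-printed JSON handed to bdevio through --json /dev/fd/62 above comes out of gen_nvmf_target_json: one heredoc fragment per subsystem with the ${hdgst:-false}/${ddgst:-false} defaults applied, fragments joined on IFS=',' and piped through jq. A stripped-down sketch of the pattern, covering only the single-subsystem case run here (gen_json is an illustrative stand-in for the harness helper, with the traddr/trsvcid values hard-coded from this run):

gen_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .    # joins fragments with ',' then pretty-prints
}

# The /dev/fd/62 seen in the trace is what process substitution produces:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_json 1) --no-huge -s 1024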
00:21:52.764 passed 00:21:52.764 Test: blockdev write read 8 blocks ...passed 00:21:52.764 Test: blockdev write read size > 128k ...passed 00:21:52.764 Test: blockdev write read invalid size ...passed 00:21:52.764 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.764 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.764 Test: blockdev write read max offset ...passed 00:21:52.764 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.764 Test: blockdev writev readv 8 blocks ...passed 00:21:52.764 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.764 Test: blockdev writev readv block ...passed 00:21:52.764 Test: blockdev writev readv size > 128k ...passed 00:21:52.764 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.765 Test: blockdev comparev and writev ...[2024-11-20 17:42:34.518298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.518365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.518388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.518399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.518690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.518708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.518725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.518735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.519027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.519045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.519062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.519088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.519366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.519385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:52.765 [2024-11-20 17:42:34.519402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.765 [2024-11-20 17:42:34.519412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.021 passed 00:21:53.021 Test: blockdev nvme passthru rw ...passed 00:21:53.021 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:42:34.604398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.021 [2024-11-20 17:42:34.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.021 [2024-11-20 17:42:34.604560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.021 [2024-11-20 17:42:34.604577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.021 [2024-11-20 17:42:34.604730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.021 [2024-11-20 17:42:34.604749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.021 [2024-11-20 17:42:34.604871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.021 [2024-11-20 17:42:34.604887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.021 passed 00:21:53.021 Test: blockdev nvme admin passthru ...passed 00:21:53.021 Test: blockdev copy ...passed 00:21:53.021 00:21:53.021 Run Summary: Type Total Ran Passed Failed Inactive 00:21:53.021 suites 1 1 n/a 0 0 00:21:53.021 tests 23 23 23 0 0 00:21:53.021 asserts 152 152 152 0 n/a 00:21:53.021 00:21:53.021 Elapsed time = 0.920 seconds 00:21:53.279 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.279 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.279 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.537 rmmod nvme_tcp 00:21:53.537 rmmod nvme_fabrics 00:21:53.537 rmmod nvme_keyring 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 100015 ']' 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 100015 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 100015 ']' 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 100015 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100015 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100015' 00:21:53.537 killing process with pid 100015 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 100015 00:21:53.537 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 100015 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:54.102 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:54.103 17:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.103 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:21:54.361 00:21:54.361 real 0m4.120s 00:21:54.361 user 0m13.622s 00:21:54.361 sys 0m1.675s 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.361 ************************************ 00:21:54.361 END TEST nvmf_bdevio_no_huge 00:21:54.361 ************************************ 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.361 ************************************ 00:21:54.361 START TEST nvmf_tls 00:21:54.361 ************************************ 00:21:54.361 17:42:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.361 * Looking for test storage... 
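Between the bdevio summary and the nvmf_tls run starting here, the nvmftestfini trap undid everything nvmftestinit built: kill the target, unload the NVMe-oF modules, strip only the SPDK-tagged iptables rules, and delete the virtual links. Condensed from the teardown trace above (the final netns removal runs inside remove_spdk_ns with xtrace silenced, so that line is inferred rather than logged):

kill "$nvmfpid" && wait "$nvmfpid"                     # nvmfpid was recorded by nvmfappstart (100015 here)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules commented 'SPDK_NVMF:...'
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # not shown verbatim: hidden behind remove_spdk_ns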
00:21:54.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:54.361 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:54.361 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:21:54.361 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:54.620 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.621 --rc genhtml_branch_coverage=1 00:21:54.621 --rc genhtml_function_coverage=1 00:21:54.621 --rc genhtml_legend=1 00:21:54.621 --rc geninfo_all_blocks=1 00:21:54.621 --rc geninfo_unexecuted_blocks=1 00:21:54.621 00:21:54.621 ' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.621 --rc genhtml_branch_coverage=1 00:21:54.621 --rc genhtml_function_coverage=1 00:21:54.621 --rc genhtml_legend=1 00:21:54.621 --rc geninfo_all_blocks=1 00:21:54.621 --rc geninfo_unexecuted_blocks=1 00:21:54.621 00:21:54.621 ' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.621 --rc genhtml_branch_coverage=1 00:21:54.621 --rc genhtml_function_coverage=1 00:21:54.621 --rc genhtml_legend=1 00:21:54.621 --rc geninfo_all_blocks=1 00:21:54.621 --rc geninfo_unexecuted_blocks=1 00:21:54.621 00:21:54.621 ' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.621 --rc genhtml_branch_coverage=1 00:21:54.621 --rc genhtml_function_coverage=1 00:21:54.621 --rc genhtml_legend=1 00:21:54.621 --rc geninfo_all_blocks=1 00:21:54.621 --rc geninfo_unexecuted_blocks=1 00:21:54.621 00:21:54.621 ' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.621 17:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:54.621 
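The "[: : integer expression expected" complaint just above (and in the bdevio run earlier) is genuine stderr from common.sh line 33: the trace shows [ '' -eq 1 ] being evaluated, and the test command refuses an empty string where -eq needs an integer, returning status 2 without aborting the script. Reduced to a standalone illustration (the variable that expanded to '' at line 33 is not visible in this log):

x=''
[ "$x" -eq 1 ]                    # stderr: "[: : integer expression expected"; exit status 2
# A defaulted expansion would keep the test quiet:
[ "${x:-0}" -eq 1 ] && echo enabled || echo disabled   # prints "disabled", no stderr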
17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.621 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:54.622 Cannot find device "nvmf_init_br" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:54.622 Cannot find device "nvmf_init_br2" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:54.622 Cannot find device "nvmf_tgt_br" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.622 Cannot find device "nvmf_tgt_br2" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:54.622 Cannot find device "nvmf_init_br" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:54.622 Cannot find device "nvmf_init_br2" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:54.622 Cannot find device "nvmf_tgt_br" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:54.622 Cannot find device "nvmf_tgt_br2" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:54.622 Cannot find device "nvmf_br" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:54.622 Cannot find device "nvmf_init_if" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:54.622 Cannot find device "nvmf_init_if2" 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.622 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:54.881 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:54.882 17:42:36 
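The block above builds the test network: two initiator veth pairs stay in the root namespace (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24), two target pairs have their interface ends moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24), all four bridge-side peers are enslaved to nvmf_br, and iptables opens TCP port 4420 on the initiator interfaces. A condensed, standalone sketch of the same topology, reduced to a single initiator/target pair (names with a _demo suffix are hypothetical; run as root):

    ip netns add tgt_ns_demo
    ip link add init_if_demo type veth peer name init_br_demo
    ip link add tgt_if_demo type veth peer name tgt_br_demo
    ip link set tgt_if_demo netns tgt_ns_demo        # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev init_if_demo
    ip netns exec tgt_ns_demo ip addr add 10.0.0.3/24 dev tgt_if_demo
    ip link add br_demo type bridge
    ip link set init_if_demo up; ip link set init_br_demo up
    ip link set tgt_br_demo up; ip link set br_demo up
    ip netns exec tgt_ns_demo ip link set tgt_if_demo up
    ip link set init_br_demo master br_demo          # bridge the two sides together
    ip link set tgt_br_demo master br_demo
    iptables -A FORWARD -i br_demo -o br_demo -j ACCEPT   # mirror the rule added above
    ping -c 1 10.0.0.3                               # initiator -> target, as tested above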
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:54.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:21:54.882 00:21:54.882 --- 10.0.0.3 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:54.882 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:21:54.882 00:21:54.882 --- 10.0.0.4 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:54.882 00:21:54.882 --- 10.0.0.1 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:54.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:54.882 00:21:54.882 --- 10.0.0.2 ping statistics --- 00:21:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.882 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.882 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=100317 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 100317 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100317 ']' 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.141 17:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.141 [2024-11-20 17:42:36.750549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
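With the four pings confirming connectivity between the root namespace and nvmf_tgt_ns_spdk in both directions, nvmfappstart launches nvmf_tgt inside the namespace with --wait-for-rpc, so the ssl socket implementation can be configured over RPC before the framework initializes, and waitforlisten blocks until the RPC socket answers. A sketch of that start-and-wait pattern, assuming rpc_get_methods (a standard SPDK RPC) as the liveness probe rather than the exact polling loop in autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.5
    done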
00:21:55.141 [2024-11-20 17:42:36.750697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.141 [2024-11-20 17:42:36.896913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.399 [2024-11-20 17:42:37.034272] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.399 [2024-11-20 17:42:37.034365] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.399 [2024-11-20 17:42:37.034390] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.399 [2024-11-20 17:42:37.034401] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.399 [2024-11-20 17:42:37.034411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.399 [2024-11-20 17:42:37.034448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:56.333 17:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:56.592 true 00:21:56.592 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.592 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:56.851 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:56.851 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:56.851 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:57.110 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:57.110 17:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:57.367 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:57.367 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:57.367 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:57.934 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:21:57.934 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:58.192 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:58.192 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:58.192 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:58.192 17:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:58.451 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:58.451 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:58.451 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:58.710 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:58.710 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:58.980 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:58.980 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:58.980 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:59.273 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.273 17:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Al6YzwUzCj 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.aaCGg2vmU0 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Al6YzwUzCj 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.aaCGg2vmU0 00:21:59.532 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.097 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:00.354 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Al6YzwUzCj 00:22:00.354 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Al6YzwUzCj 00:22:00.354 17:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.611 [2024-11-20 17:42:42.285250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.611 17:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:00.868 17:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:01.125 [2024-11-20 17:42:42.881550] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.125 [2024-11-20 17:42:42.881914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:01.125 17:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.690 malloc0 00:22:01.690 17:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:01.948 17:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Al6YzwUzCj 00:22:02.208 17:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.497 17:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Al6YzwUzCj 00:22:14.729 Initializing NVMe Controllers 00:22:14.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.729 Initialization complete. Launching workers. 00:22:14.729 ======================================================== 00:22:14.729 Latency(us) 00:22:14.729 Device Information : IOPS MiB/s Average min max 00:22:14.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9167.80 35.81 6982.29 2845.03 10896.75 00:22:14.729 ======================================================== 00:22:14.729 Total : 9167.80 35.81 6982.29 2845.03 10896.75 00:22:14.729 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Al6YzwUzCj 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Al6YzwUzCj 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100688 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100688 /var/tmp/bdevperf.sock 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100688 ']' 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.729 17:42:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.729 [2024-11-20 17:42:54.434503] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
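To recap the setup traced above: tls.sh derived two interchange-format PSKs from 32-hex-digit inputs (prefix NVMeTLSkey-1:01:, then base64 of the ASCII key material with a short checksum appended by format_key in nvmf/common.sh), wrote them to mktemp files, pinned the ssl implementation to TLS 1.3, and stood up a TLS-capable subsystem with the first key authorized for host1. The RPC chain, replayed from the trace with the same paths and arguments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" sock_set_default_impl -i ssl
    "$rpc" sock_impl_set_options -i ssl --tls-version 13
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" keyring_file_add_key key0 /tmp/tmp.Al6YzwUzCj
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0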
00:22:14.729 [2024-11-20 17:42:54.434616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100688 ] 00:22:14.729 [2024-11-20 17:42:54.574569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.729 [2024-11-20 17:42:54.702725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.729 17:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.729 17:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:14.729 17:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Al6YzwUzCj 00:22:14.729 17:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.729 [2024-11-20 17:42:56.033822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.729 TLSTESTn1 00:22:14.729 17:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:14.729 Running I/O for 10 seconds... 00:22:16.669 3925.00 IOPS, 15.33 MiB/s [2024-11-20T17:42:59.392Z] 3862.00 IOPS, 15.09 MiB/s [2024-11-20T17:43:00.328Z] 3908.67 IOPS, 15.27 MiB/s [2024-11-20T17:43:01.263Z] 3961.50 IOPS, 15.47 MiB/s [2024-11-20T17:43:02.654Z] 4008.20 IOPS, 15.66 MiB/s [2024-11-20T17:43:03.591Z] 4018.67 IOPS, 15.70 MiB/s [2024-11-20T17:43:04.528Z] 4034.86 IOPS, 15.76 MiB/s [2024-11-20T17:43:05.463Z] 4055.38 IOPS, 15.84 MiB/s [2024-11-20T17:43:06.400Z] 4062.11 IOPS, 15.87 MiB/s [2024-11-20T17:43:06.400Z] 4054.70 IOPS, 15.84 MiB/s 00:22:24.612 Latency(us) 00:22:24.612 [2024-11-20T17:43:06.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.612 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.612 Verification LBA range: start 0x0 length 0x2000 00:22:24.612 TLSTESTn1 : 10.02 4060.06 15.86 0.00 0.00 31465.86 6702.55 32887.16 00:22:24.612 [2024-11-20T17:43:06.400Z] =================================================================================================================== 00:22:24.612 [2024-11-20T17:43:06.400Z] Total : 4060.06 15.86 0.00 0.00 31465.86 6702.55 32887.16 00:22:24.612 { 00:22:24.612 "results": [ 00:22:24.612 { 00:22:24.612 "job": "TLSTESTn1", 00:22:24.612 "core_mask": "0x4", 00:22:24.612 "workload": "verify", 00:22:24.612 "status": "finished", 00:22:24.612 "verify_range": { 00:22:24.612 "start": 0, 00:22:24.612 "length": 8192 00:22:24.612 }, 00:22:24.612 "queue_depth": 128, 00:22:24.612 "io_size": 4096, 00:22:24.612 "runtime": 10.018314, 00:22:24.612 "iops": 4060.0643980613904, 00:22:24.612 "mibps": 15.859626554927306, 00:22:24.612 "io_failed": 0, 00:22:24.612 "io_timeout": 0, 00:22:24.612 "avg_latency_us": 31465.85859905012, 00:22:24.612 "min_latency_us": 6702.545454545455, 00:22:24.612 "max_latency_us": 32887.156363636364 00:22:24.612 } 00:22:24.612 ], 00:22:24.612 "core_count": 1 00:22:24.612 } 00:22:24.612 17:43:06 
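The JSON block above is bdevperf.py's summary for the positive TLS test: roughly 4060 IOPS at 4 KiB over the encrypted connection, versus roughly 9168 IOPS for the spdk_nvme_perf run earlier (different tools, queue depths, and workloads, so the two numbers are not directly comparable). Given the shape shown, the headline figures can be pulled out with jq, assuming the JSON is saved to a file:

    # Extract per-job throughput from the bdevperf summary above.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"' results.json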
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 100688 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100688 ']' 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100688 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100688 00:22:24.612 killing process with pid 100688 00:22:24.612 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.612 00:22:24.612 Latency(us) 00:22:24.612 [2024-11-20T17:43:06.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.612 [2024-11-20T17:43:06.400Z] =================================================================================================================== 00:22:24.612 [2024-11-20T17:43:06.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100688' 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100688 00:22:24.612 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100688 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaCGg2vmU0 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaCGg2vmU0 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaCGg2vmU0 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aaCGg2vmU0 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100847 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100847 /var/tmp/bdevperf.sock 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100847 ']' 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.871 17:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.139 [2024-11-20 17:43:06.701561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:25.139 [2024-11-20 17:43:06.702649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100847 ] 00:22:25.139 [2024-11-20 17:43:06.842486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.405 [2024-11-20 17:43:06.961074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.341 17:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.341 17:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:26.341 17:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaCGg2vmU0 00:22:26.600 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.600 [2024-11-20 17:43:08.379878] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.858 [2024-11-20 17:43:08.389787] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.858 [2024-11-20 17:43:08.390772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1989460 (107): Transport endpoint is not connected 00:22:26.858 [2024-11-20 17:43:08.391761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1989460 (9): Bad file descriptor 
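Here the suite turns to negative tests: target/tls.sh@147 wraps run_bdevperf in NOT and hands the initiator /tmp/tmp.aaCGg2vmU0, the second key, whose bytes do not match the key0 registered on the target for host1. The TLS handshake therefore fails, the initiator just sees the socket drop (the "Transport endpoint is not connected" / "Bad file descriptor" errors above), and bdev_nvme_attach_controller returns -5. A hedged sketch of the NOT idiom (the real helper in autotest_common.sh is more elaborate, e.g. it distinguishes crash exit codes):

    NOT() {
        ! "$@"    # succeed only when the wrapped command fails
    }
    NOT false && echo "negative test passed"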
00:22:26.858 [2024-11-20 17:43:08.392756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.858 [2024-11-20 17:43:08.392780] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:26.858 [2024-11-20 17:43:08.392793] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:26.858 [2024-11-20 17:43:08.392810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.858 2024/11/20 17:43:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:26.858 request: 00:22:26.858 { 00:22:26.858 "method": "bdev_nvme_attach_controller", 00:22:26.858 "params": { 00:22:26.858 "name": "TLSTEST", 00:22:26.858 "trtype": "tcp", 00:22:26.858 "traddr": "10.0.0.3", 00:22:26.858 "adrfam": "ipv4", 00:22:26.858 "trsvcid": "4420", 00:22:26.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.858 "prchk_reftag": false, 00:22:26.858 "prchk_guard": false, 00:22:26.858 "hdgst": false, 00:22:26.858 "ddgst": false, 00:22:26.858 "psk": "key0", 00:22:26.858 "allow_unrecognized_csi": false 00:22:26.858 } 00:22:26.858 } 00:22:26.858 Got JSON-RPC error response 00:22:26.858 GoRPCClient: error on JSON-RPC call 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100847 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100847 ']' 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100847 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100847 00:22:26.858 killing process with pid 100847 00:22:26.858 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.858 00:22:26.858 Latency(us) 00:22:26.858 [2024-11-20T17:43:08.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.858 [2024-11-20T17:43:08.646Z] =================================================================================================================== 00:22:26.858 [2024-11-20T17:43:08.646Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100847' 00:22:26.858 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100847 00:22:26.858 17:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100847 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Al6YzwUzCj 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Al6YzwUzCj 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Al6YzwUzCj 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Al6YzwUzCj 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100905 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100905 /var/tmp/bdevperf.sock 00:22:27.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100905 ']' 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
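Between variants each bdevperf instance is torn down with killprocess, traced above for pid 100847: verify the pid is alive with kill -0, refuse to kill a process named sudo, then kill and reap it. A condensed sketch of that pattern (not the full helper):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0                       # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }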
00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.118 17:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.118 [2024-11-20 17:43:08.807638] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:27.118 [2024-11-20 17:43:08.807751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100905 ] 00:22:27.376 [2024-11-20 17:43:08.942756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.376 [2024-11-20 17:43:09.056735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.340 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.340 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.340 17:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Al6YzwUzCj 00:22:28.340 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:28.906 [2024-11-20 17:43:10.413824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.906 [2024-11-20 17:43:10.419895] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:28.906 [2024-11-20 17:43:10.419942] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:28.906 [2024-11-20 17:43:10.419999] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.906 [2024-11-20 17:43:10.420718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b460 (107): Transport endpoint is not connected 00:22:28.906 [2024-11-20 17:43:10.421710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b460 (9): Bad file descriptor 00:22:28.906 [2024-11-20 17:43:10.422705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.906 [2024-11-20 17:43:10.422730] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:28.906 [2024-11-20 17:43:10.422742] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:28.906 [2024-11-20 17:43:10.422759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
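This second variant (target/tls.sh@150) presents the correct key bytes but hostnqn host2. The target derives the TLS PSK identity from the handshake as "NVMe0R01 <hostnqn> <subnqn>" and, as the tcp.c error above shows, finds no key registered under that identity, so the attach fails the same way. The failing identity string can be reproduced verbatim from the trace:

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"    # identity the target could not match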
00:22:28.906 2024/11/20 17:43:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:28.906 request: 00:22:28.906 { 00:22:28.906 "method": "bdev_nvme_attach_controller", 00:22:28.906 "params": { 00:22:28.906 "name": "TLSTEST", 00:22:28.906 "trtype": "tcp", 00:22:28.906 "traddr": "10.0.0.3", 00:22:28.906 "adrfam": "ipv4", 00:22:28.906 "trsvcid": "4420", 00:22:28.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.906 "prchk_reftag": false, 00:22:28.906 "prchk_guard": false, 00:22:28.906 "hdgst": false, 00:22:28.906 "ddgst": false, 00:22:28.906 "psk": "key0", 00:22:28.906 "allow_unrecognized_csi": false 00:22:28.906 } 00:22:28.906 } 00:22:28.906 Got JSON-RPC error response 00:22:28.906 GoRPCClient: error on JSON-RPC call 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100905 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100905 ']' 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100905 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100905 00:22:28.906 killing process with pid 100905 00:22:28.906 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.906 00:22:28.906 Latency(us) 00:22:28.906 [2024-11-20T17:43:10.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.906 [2024-11-20T17:43:10.694Z] =================================================================================================================== 00:22:28.906 [2024-11-20T17:43:10.694Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100905' 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100905 00:22:28.906 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100905 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.165 17:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Al6YzwUzCj 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Al6YzwUzCj 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Al6YzwUzCj 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Al6YzwUzCj 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100959 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100959 /var/tmp/bdevperf.sock 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 100959 ']' 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.165 17:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.165 [2024-11-20 17:43:10.864598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:29.165 [2024-11-20 17:43:10.864727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100959 ] 00:22:29.424 [2024-11-20 17:43:11.005391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.424 [2024-11-20 17:43:11.140440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.359 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.359 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:30.359 17:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Al6YzwUzCj 00:22:30.617 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:30.876 [2024-11-20 17:43:12.448098] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.876 [2024-11-20 17:43:12.460145] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:30.876 [2024-11-20 17:43:12.460193] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:30.877 [2024-11-20 17:43:12.460249] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:30.877 [2024-11-20 17:43:12.461010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc460 (107): Transport endpoint is not connected 00:22:30.877 [2024-11-20 17:43:12.461999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc460 (9): Bad file descriptor 00:22:30.877 [2024-11-20 17:43:12.462995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:30.877 [2024-11-20 17:43:12.463023] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:30.877 [2024-11-20 17:43:12.463035] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:30.877 [2024-11-20 17:43:12.463051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
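The third variant (target/tls.sh@153) flips the other half of the identity: a valid key and hostnqn, but subsystem cnode2, for which no host was ever authorized; the failure chain above is identical. When debugging cases like these, the keys a running target holds can be listed over RPC; keyring_get_keys is assumed here to be available alongside the keyring_file_add_key RPC used earlier:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_get_keys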
00:22:30.877 2024/11/20 17:43:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:30.877 request: 00:22:30.877 { 00:22:30.877 "method": "bdev_nvme_attach_controller", 00:22:30.877 "params": { 00:22:30.877 "name": "TLSTEST", 00:22:30.877 "trtype": "tcp", 00:22:30.877 "traddr": "10.0.0.3", 00:22:30.877 "adrfam": "ipv4", 00:22:30.877 "trsvcid": "4420", 00:22:30.877 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.877 "prchk_reftag": false, 00:22:30.877 "prchk_guard": false, 00:22:30.877 "hdgst": false, 00:22:30.877 "ddgst": false, 00:22:30.877 "psk": "key0", 00:22:30.877 "allow_unrecognized_csi": false 00:22:30.877 } 00:22:30.877 } 00:22:30.877 Got JSON-RPC error response 00:22:30.877 GoRPCClient: error on JSON-RPC call 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100959 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100959 ']' 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100959 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100959 00:22:30.877 killing process with pid 100959 00:22:30.877 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.877 00:22:30.877 Latency(us) 00:22:30.877 [2024-11-20T17:43:12.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.877 [2024-11-20T17:43:12.665Z] =================================================================================================================== 00:22:30.877 [2024-11-20T17:43:12.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100959' 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100959 00:22:30.877 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100959 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.135 17:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101018 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101018 /var/tmp/bdevperf.sock 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101018 ']' 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.135 17:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.135 [2024-11-20 17:43:12.889497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:31.135 [2024-11-20 17:43:12.889626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101018 ] 00:22:31.393 [2024-11-20 17:43:13.024331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.393 [2024-11-20 17:43:13.149646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.328 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.328 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:32.328 17:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:32.587 [2024-11-20 17:43:14.225407] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:32.587 [2024-11-20 17:43:14.225468] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:32.587 2024/11/20 17:43:14 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:32.587 request: 00:22:32.587 { 00:22:32.587 "method": "keyring_file_add_key", 00:22:32.587 "params": { 00:22:32.587 "name": "key0", 00:22:32.587 "path": "" 00:22:32.587 } 00:22:32.587 } 00:22:32.587 Got JSON-RPC error response 00:22:32.587 GoRPCClient: error on JSON-RPC call 00:22:32.587 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.848 [2024-11-20 17:43:14.533598] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.848 [2024-11-20 17:43:14.533674] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:32.848 2024/11/20 17:43:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:32.848 request: 00:22:32.848 { 00:22:32.848 "method": "bdev_nvme_attach_controller", 00:22:32.848 "params": { 00:22:32.848 "name": "TLSTEST", 00:22:32.848 "trtype": "tcp", 00:22:32.848 "traddr": "10.0.0.3", 00:22:32.848 "adrfam": "ipv4", 00:22:32.848 "trsvcid": "4420", 00:22:32.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.848 "prchk_reftag": false, 00:22:32.848 "prchk_guard": false, 00:22:32.848 "hdgst": false, 00:22:32.848 "ddgst": false, 00:22:32.848 "psk": "key0", 00:22:32.848 "allow_unrecognized_csi": false 00:22:32.848 } 00:22:32.848 } 00:22:32.848 Got JSON-RPC error response 00:22:32.848 GoRPCClient: error on JSON-RPC call 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 101018 00:22:32.848 17:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101018 ']' 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101018 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101018 00:22:32.848 killing process with pid 101018 00:22:32.848 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.848 00:22:32.848 Latency(us) 00:22:32.848 [2024-11-20T17:43:14.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.848 [2024-11-20T17:43:14.636Z] =================================================================================================================== 00:22:32.848 [2024-11-20T17:43:14.636Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101018' 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101018 00:22:32.848 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101018 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 100317 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 100317 ']' 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 100317 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.118 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100317 00:22:33.376 killing process with pid 100317 00:22:33.376 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.376 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.376 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100317' 00:22:33.376 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 100317 00:22:33.376 17:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 100317 00:22:33.635 17:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.uIBRaQWr5B 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.uIBRaQWr5B 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101092 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101092 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101092 ']' 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.635 17:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.635 [2024-11-20 17:43:15.373118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
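
The `format_interchange_psk` call at tls.sh@160 above builds the retained-PSK interchange string via `format_key` and an inline `python -` snippet. Assuming the standard NVMe/TCP PSK interchange layout (base64 of the configured PSK bytes followed by their CRC32 in little-endian order, prefixed with `NVMeTLSkey-1:<digest>:`, where `02` selects SHA-384), this one-liner reproduces the `key_long` value printed in the trace:

    # Sketch under the assumption above; compare the output with key_long in the log.
    python3 -c 'import base64,struct,zlib
    k=b"00112233445566778899aabbccddeeff0011223344556677"
    print("NVMeTLSkey-1:02:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")'
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
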
00:22:33.635 [2024-11-20 17:43:15.373259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.893 [2024-11-20 17:43:15.512776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.893 [2024-11-20 17:43:15.635880] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.893 [2024-11-20 17:43:15.636223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.893 [2024-11-20 17:43:15.636245] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.893 [2024-11-20 17:43:15.636256] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.893 [2024-11-20 17:43:15.636263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.893 [2024-11-20 17:43:15.636307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uIBRaQWr5B 00:22:34.829 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.088 [2024-11-20 17:43:16.678444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.088 17:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.347 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:35.606 [2024-11-20 17:43:17.254665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.606 [2024-11-20 17:43:17.254972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:35.606 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.865 malloc0 00:22:35.865 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.451 17:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:22:36.451 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uIBRaQWr5B 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uIBRaQWr5B 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101208 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101208 /var/tmp/bdevperf.sock 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101208 ']' 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.022 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.023 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.023 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.023 17:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.023 [2024-11-20 17:43:18.568805] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
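
For reference, the target-side TLS setup that `setup_nvmf_tgt` (tls.sh@50-59) traced above condenses to the RPC sequence below. Commands, address, and NQNs are copied from the log; `rpc.py` abbreviates the full scripts/rpc.py path, and `-k` on the listener is what enables TLS on the port:

    # Target side, condensed from the setup_nvmf_tgt trace above:
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B        # key file mode must be 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
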
00:22:37.023 [2024-11-20 17:43:18.568906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101208 ] 00:22:37.023 [2024-11-20 17:43:18.706427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.288 [2024-11-20 17:43:18.838644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.223 17:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.223 17:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:38.223 17:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:22:38.223 17:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:38.481 [2024-11-20 17:43:20.225143] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.739 TLSTESTn1 00:22:38.739 17:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:38.739 Running I/O for 10 seconds... 00:22:40.671 4061.00 IOPS, 15.86 MiB/s [2024-11-20T17:43:23.849Z] 4020.50 IOPS, 15.71 MiB/s [2024-11-20T17:43:24.783Z] 4086.33 IOPS, 15.96 MiB/s [2024-11-20T17:43:25.717Z] 4075.50 IOPS, 15.92 MiB/s [2024-11-20T17:43:26.652Z] 4090.80 IOPS, 15.98 MiB/s [2024-11-20T17:43:27.587Z] 4090.83 IOPS, 15.98 MiB/s [2024-11-20T17:43:28.523Z] 4090.14 IOPS, 15.98 MiB/s [2024-11-20T17:43:29.458Z] 4088.25 IOPS, 15.97 MiB/s [2024-11-20T17:43:30.834Z] 4092.78 IOPS, 15.99 MiB/s [2024-11-20T17:43:30.834Z] 4098.80 IOPS, 16.01 MiB/s 00:22:49.046 Latency(us) 00:22:49.046 [2024-11-20T17:43:30.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.046 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:49.046 Verification LBA range: start 0x0 length 0x2000 00:22:49.046 TLSTESTn1 : 10.02 4104.20 16.03 0.00 0.00 31127.67 6672.76 30980.65 00:22:49.046 [2024-11-20T17:43:30.834Z] =================================================================================================================== 00:22:49.046 [2024-11-20T17:43:30.834Z] Total : 4104.20 16.03 0.00 0.00 31127.67 6672.76 30980.65 00:22:49.046 { 00:22:49.046 "results": [ 00:22:49.046 { 00:22:49.046 "job": "TLSTESTn1", 00:22:49.046 "core_mask": "0x4", 00:22:49.046 "workload": "verify", 00:22:49.046 "status": "finished", 00:22:49.046 "verify_range": { 00:22:49.046 "start": 0, 00:22:49.046 "length": 8192 00:22:49.046 }, 00:22:49.046 "queue_depth": 128, 00:22:49.046 "io_size": 4096, 00:22:49.046 "runtime": 10.017779, 00:22:49.046 "iops": 4104.203137242297, 00:22:49.046 "mibps": 16.032043504852723, 00:22:49.046 "io_failed": 0, 00:22:49.046 "io_timeout": 0, 00:22:49.046 "avg_latency_us": 31127.66731363249, 00:22:49.046 "min_latency_us": 6672.756363636364, 00:22:49.046 "max_latency_us": 30980.654545454545 00:22:49.046 } 00:22:49.046 ], 00:22:49.046 "core_count": 1 00:22:49.046 } 00:22:49.046 17:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101208 ']' 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:49.046 killing process with pid 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101208' 00:22:49.046 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.046 00:22:49.046 Latency(us) 00:22:49.046 [2024-11-20T17:43:30.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.046 [2024-11-20T17:43:30.834Z] =================================================================================================================== 00:22:49.046 [2024-11-20T17:43:30.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101208 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.uIBRaQWr5B 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uIBRaQWr5B 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uIBRaQWr5B 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uIBRaQWr5B 00:22:49.046 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.uIBRaQWr5B 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101366 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101366 /var/tmp/bdevperf.sock 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101366 ']' 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.047 17:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.305 [2024-11-20 17:43:30.890917] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:49.305 [2024-11-20 17:43:30.891048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101366 ] 00:22:49.305 [2024-11-20 17:43:31.033416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.563 [2024-11-20 17:43:31.158134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.499 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.499 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:50.499 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:22:50.757 [2024-11-20 17:43:32.297841] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uIBRaQWr5B': 0100666 00:22:50.757 [2024-11-20 17:43:32.297904] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:50.757 2024/11/20 17:43:32 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.uIBRaQWr5B], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:50.757 request: 00:22:50.757 { 00:22:50.757 "method": "keyring_file_add_key", 00:22:50.757 "params": { 00:22:50.757 "name": "key0", 00:22:50.757 "path": "/tmp/tmp.uIBRaQWr5B" 00:22:50.757 } 00:22:50.757 } 00:22:50.757 Got JSON-RPC error response 00:22:50.757 GoRPCClient: error on JSON-RPC call 00:22:50.757 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.015 [2024-11-20 17:43:32.618019] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.015 [2024-11-20 17:43:32.618098] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:51.016 2024/11/20 17:43:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:51.016 request: 00:22:51.016 { 00:22:51.016 "method": "bdev_nvme_attach_controller", 00:22:51.016 "params": { 00:22:51.016 "name": "TLSTEST", 00:22:51.016 "trtype": "tcp", 00:22:51.016 "traddr": "10.0.0.3", 00:22:51.016 "adrfam": "ipv4", 00:22:51.016 "trsvcid": "4420", 00:22:51.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.016 "prchk_reftag": false, 00:22:51.016 "prchk_guard": false, 00:22:51.016 "hdgst": false, 00:22:51.016 "ddgst": false, 00:22:51.016 "psk": "key0", 00:22:51.016 "allow_unrecognized_csi": false 00:22:51.016 } 00:22:51.016 } 00:22:51.016 Got JSON-RPC error response 00:22:51.016 GoRPCClient: error on JSON-RPC call 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 101366 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101366 ']' 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101366 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101366 00:22:51.016 killing process with pid 101366 00:22:51.016 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.016 00:22:51.016 Latency(us) 00:22:51.016 [2024-11-20T17:43:32.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.016 [2024-11-20T17:43:32.804Z] =================================================================================================================== 00:22:51.016 [2024-11-20T17:43:32.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101366' 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101366 00:22:51.016 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101366 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 
-- # return 1 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 101092 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101092 ']' 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101092 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.275 17:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101092 00:22:51.275 killing process with pid 101092 00:22:51.275 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:51.275 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:51.275 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101092' 00:22:51.275 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101092 00:22:51.275 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101092 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101429 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101429 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101429 ']' 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.842 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.843 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:51.843 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.843 17:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.843 [2024-11-20 17:43:33.412308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:51.843 [2024-11-20 17:43:33.412445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.843 [2024-11-20 17:43:33.556902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.101 [2024-11-20 17:43:33.683849] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.101 [2024-11-20 17:43:33.684113] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.101 [2024-11-20 17:43:33.684136] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.101 [2024-11-20 17:43:33.684147] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.101 [2024-11-20 17:43:33.684154] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.101 [2024-11-20 17:43:33.684189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.717 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.717 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:52.717 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:52.717 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.717 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uIBRaQWr5B 00:22:52.975 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.233 [2024-11-20 17:43:34.847256] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:22:53.233 17:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:53.492 17:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:53.750 [2024-11-20 17:43:35.471421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.750 [2024-11-20 17:43:35.471968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:53.750 17:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:54.317 malloc0 00:22:54.317 17:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:54.576 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:22:54.835 [2024-11-20 17:43:36.562115] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uIBRaQWr5B': 0100666 00:22:54.835 [2024-11-20 17:43:36.562168] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:54.835 2024/11/20 17:43:36 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.uIBRaQWr5B], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:54.835 request: 00:22:54.835 { 00:22:54.835 "method": "keyring_file_add_key", 00:22:54.835 "params": { 00:22:54.835 "name": "key0", 00:22:54.835 "path": "/tmp/tmp.uIBRaQWr5B" 00:22:54.835 } 00:22:54.835 } 00:22:54.835 Got JSON-RPC error response 00:22:54.835 GoRPCClient: error on JSON-RPC call 00:22:54.835 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.402 [2024-11-20 17:43:36.910248] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:55.402 [2024-11-20 17:43:36.910332] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:55.402 2024/11/20 17:43:36 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:55.402 request: 00:22:55.402 { 00:22:55.402 "method": "nvmf_subsystem_add_host", 00:22:55.402 "params": { 00:22:55.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.402 "host": "nqn.2016-06.io.spdk:host1", 00:22:55.402 "psk": "key0" 00:22:55.402 } 00:22:55.402 } 00:22:55.402 Got JSON-RPC error response 00:22:55.402 GoRPCClient: error on JSON-RPC call 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 101429 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101429 ']' 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101429 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101429 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:55.402 killing process with pid 101429 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101429' 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101429 00:22:55.402 17:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101429 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.uIBRaQWr5B 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101558 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101558 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101558 ']' 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.661 17:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.661 [2024-11-20 17:43:37.366442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
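
Both rejected `keyring_file_add_key` calls earlier in the log failed on mode 0100666: the file-based keyring refuses key files that are readable by group or other, which is why tls.sh@182 above restores 0600 before this final positive run. The contrast in one place:

    chmod 0666 /tmp/tmp.uIBRaQWr5B   # keyring_file_add_key fails: "Invalid permissions ... 0100666"
    chmod 0600 /tmp/tmp.uIBRaQWr5B   # accepted; key0 registers and the TLS attach succeeds
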
00:22:55.661 [2024-11-20 17:43:37.366569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.920 [2024-11-20 17:43:37.514491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.920 [2024-11-20 17:43:37.648169] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.920 [2024-11-20 17:43:37.648246] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.920 [2024-11-20 17:43:37.648262] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.920 [2024-11-20 17:43:37.648273] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.920 [2024-11-20 17:43:37.648283] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.920 [2024-11-20 17:43:37.648322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uIBRaQWr5B 00:22:56.867 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.125 [2024-11-20 17:43:38.817586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.125 17:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:57.383 17:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:57.959 [2024-11-20 17:43:39.441811] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.959 [2024-11-20 17:43:39.442175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.960 17:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:58.217 malloc0 00:22:58.217 17:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:58.475 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:22:58.733 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=101673 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 101673 /var/tmp/bdevperf.sock 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101673 ']' 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.990 17:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.990 [2024-11-20 17:43:40.692763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
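
The initiator-side flow that follows is symmetric to the earlier successful run: bdevperf is started with `-z` so it sits idle until driven over its private RPC socket, the PSK is registered inside that process, the controller is attached with `--psk`, and `perform_tests` starts the verify workload. Condensed from the trace, with `rpc.py` again abbreviating the full path:

    # Initiator side, condensed from the bdevperf trace:
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
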
00:22:58.990 [2024-11-20 17:43:40.692906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101673 ] 00:22:59.247 [2024-11-20 17:43:40.843332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.247 [2024-11-20 17:43:41.008412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.197 17:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.197 17:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:00.197 17:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:23:00.455 17:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.713 [2024-11-20 17:43:42.398156] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.713 TLSTESTn1 00:23:00.971 17:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:01.230 17:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:01.230 "subsystems": [ 00:23:01.230 { 00:23:01.230 "subsystem": "keyring", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "keyring_file_add_key", 00:23:01.230 "params": { 00:23:01.230 "name": "key0", 00:23:01.230 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:01.230 } 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "iobuf", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "iobuf_set_options", 00:23:01.230 "params": { 00:23:01.230 "large_bufsize": 135168, 00:23:01.230 "large_pool_count": 1024, 00:23:01.230 "small_bufsize": 8192, 00:23:01.230 "small_pool_count": 8192 00:23:01.230 } 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "sock", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "sock_set_default_impl", 00:23:01.230 "params": { 00:23:01.230 "impl_name": "posix" 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "sock_impl_set_options", 00:23:01.230 "params": { 00:23:01.230 "enable_ktls": false, 00:23:01.230 "enable_placement_id": 0, 00:23:01.230 "enable_quickack": false, 00:23:01.230 "enable_recv_pipe": true, 00:23:01.230 "enable_zerocopy_send_client": false, 00:23:01.230 "enable_zerocopy_send_server": true, 00:23:01.230 "impl_name": "ssl", 00:23:01.230 "recv_buf_size": 4096, 00:23:01.230 "send_buf_size": 4096, 00:23:01.230 "tls_version": 0, 00:23:01.230 "zerocopy_threshold": 0 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "sock_impl_set_options", 00:23:01.230 "params": { 00:23:01.230 "enable_ktls": false, 00:23:01.230 "enable_placement_id": 0, 00:23:01.230 "enable_quickack": false, 00:23:01.230 "enable_recv_pipe": true, 00:23:01.230 "enable_zerocopy_send_client": false, 00:23:01.230 "enable_zerocopy_send_server": true, 00:23:01.230 "impl_name": "posix", 00:23:01.230 "recv_buf_size": 2097152, 00:23:01.230 "send_buf_size": 
2097152, 00:23:01.230 "tls_version": 0, 00:23:01.230 "zerocopy_threshold": 0 00:23:01.230 } 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "vmd", 00:23:01.230 "config": [] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "accel", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "accel_set_options", 00:23:01.230 "params": { 00:23:01.230 "buf_count": 2048, 00:23:01.230 "large_cache_size": 16, 00:23:01.230 "sequence_count": 2048, 00:23:01.230 "small_cache_size": 128, 00:23:01.230 "task_count": 2048 00:23:01.230 } 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "bdev", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "bdev_set_options", 00:23:01.230 "params": { 00:23:01.230 "bdev_auto_examine": true, 00:23:01.230 "bdev_io_cache_size": 256, 00:23:01.230 "bdev_io_pool_size": 65535, 00:23:01.230 "iobuf_large_cache_size": 16, 00:23:01.230 "iobuf_small_cache_size": 128 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "bdev_raid_set_options", 00:23:01.230 "params": { 00:23:01.230 "process_max_bandwidth_mb_sec": 0, 00:23:01.230 "process_window_size_kb": 1024 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "bdev_iscsi_set_options", 00:23:01.230 "params": { 00:23:01.230 "timeout_sec": 30 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "bdev_nvme_set_options", 00:23:01.230 "params": { 00:23:01.230 "action_on_timeout": "none", 00:23:01.230 "allow_accel_sequence": false, 00:23:01.230 "arbitration_burst": 0, 00:23:01.230 "bdev_retry_count": 3, 00:23:01.230 "ctrlr_loss_timeout_sec": 0, 00:23:01.230 "delay_cmd_submit": true, 00:23:01.230 "dhchap_dhgroups": [ 00:23:01.230 "null", 00:23:01.230 "ffdhe2048", 00:23:01.230 "ffdhe3072", 00:23:01.230 "ffdhe4096", 00:23:01.230 "ffdhe6144", 00:23:01.230 "ffdhe8192" 00:23:01.230 ], 00:23:01.230 "dhchap_digests": [ 00:23:01.230 "sha256", 00:23:01.230 "sha384", 00:23:01.230 "sha512" 00:23:01.230 ], 00:23:01.230 "disable_auto_failback": false, 00:23:01.230 "fast_io_fail_timeout_sec": 0, 00:23:01.230 "generate_uuids": false, 00:23:01.230 "high_priority_weight": 0, 00:23:01.230 "io_path_stat": false, 00:23:01.230 "io_queue_requests": 0, 00:23:01.230 "keep_alive_timeout_ms": 10000, 00:23:01.230 "low_priority_weight": 0, 00:23:01.230 "medium_priority_weight": 0, 00:23:01.230 "nvme_adminq_poll_period_us": 10000, 00:23:01.230 "nvme_error_stat": false, 00:23:01.230 "nvme_ioq_poll_period_us": 0, 00:23:01.230 "rdma_cm_event_timeout_ms": 0, 00:23:01.230 "rdma_max_cq_size": 0, 00:23:01.230 "rdma_srq_size": 0, 00:23:01.230 "reconnect_delay_sec": 0, 00:23:01.230 "timeout_admin_us": 0, 00:23:01.230 "timeout_us": 0, 00:23:01.230 "transport_ack_timeout": 0, 00:23:01.230 "transport_retry_count": 4, 00:23:01.230 "transport_tos": 0 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "bdev_nvme_set_hotplug", 00:23:01.230 "params": { 00:23:01.230 "enable": false, 00:23:01.230 "period_us": 100000 00:23:01.230 } 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "method": "bdev_malloc_create", 00:23:01.230 "params": { 00:23:01.230 "block_size": 4096, 00:23:01.230 "dif_is_head_of_md": false, 00:23:01.230 "dif_pi_format": 0, 00:23:01.230 "dif_type": 0, 00:23:01.230 "md_size": 0, 00:23:01.230 "name": "malloc0", 00:23:01.230 "num_blocks": 8192, 00:23:01.230 "optimal_io_boundary": 0, 00:23:01.230 "physical_block_size": 4096, 00:23:01.230 "uuid": "e7ad874f-bca3-4029-a010-6fe3b6787dfa" 00:23:01.230 } 00:23:01.230 }, 
00:23:01.230 { 00:23:01.230 "method": "bdev_wait_for_examine" 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "nbd", 00:23:01.230 "config": [] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "scheduler", 00:23:01.230 "config": [ 00:23:01.230 { 00:23:01.230 "method": "framework_set_scheduler", 00:23:01.230 "params": { 00:23:01.230 "name": "static" 00:23:01.230 } 00:23:01.230 } 00:23:01.230 ] 00:23:01.230 }, 00:23:01.230 { 00:23:01.230 "subsystem": "nvmf", 00:23:01.231 "config": [ 00:23:01.231 { 00:23:01.231 "method": "nvmf_set_config", 00:23:01.231 "params": { 00:23:01.231 "admin_cmd_passthru": { 00:23:01.231 "identify_ctrlr": false 00:23:01.231 }, 00:23:01.231 "dhchap_dhgroups": [ 00:23:01.231 "null", 00:23:01.231 "ffdhe2048", 00:23:01.231 "ffdhe3072", 00:23:01.231 "ffdhe4096", 00:23:01.231 "ffdhe6144", 00:23:01.231 "ffdhe8192" 00:23:01.231 ], 00:23:01.231 "dhchap_digests": [ 00:23:01.231 "sha256", 00:23:01.231 "sha384", 00:23:01.231 "sha512" 00:23:01.231 ], 00:23:01.231 "discovery_filter": "match_any" 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_set_max_subsystems", 00:23:01.231 "params": { 00:23:01.231 "max_subsystems": 1024 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_set_crdt", 00:23:01.231 "params": { 00:23:01.231 "crdt1": 0, 00:23:01.231 "crdt2": 0, 00:23:01.231 "crdt3": 0 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_create_transport", 00:23:01.231 "params": { 00:23:01.231 "abort_timeout_sec": 1, 00:23:01.231 "ack_timeout": 0, 00:23:01.231 "buf_cache_size": 4294967295, 00:23:01.231 "c2h_success": false, 00:23:01.231 "data_wr_pool_size": 0, 00:23:01.231 "dif_insert_or_strip": false, 00:23:01.231 "in_capsule_data_size": 4096, 00:23:01.231 "io_unit_size": 131072, 00:23:01.231 "max_aq_depth": 128, 00:23:01.231 "max_io_qpairs_per_ctrlr": 127, 00:23:01.231 "max_io_size": 131072, 00:23:01.231 "max_queue_depth": 128, 00:23:01.231 "num_shared_buffers": 511, 00:23:01.231 "sock_priority": 0, 00:23:01.231 "trtype": "TCP", 00:23:01.231 "zcopy": false 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_create_subsystem", 00:23:01.231 "params": { 00:23:01.231 "allow_any_host": false, 00:23:01.231 "ana_reporting": false, 00:23:01.231 "max_cntlid": 65519, 00:23:01.231 "max_namespaces": 10, 00:23:01.231 "min_cntlid": 1, 00:23:01.231 "model_number": "SPDK bdev Controller", 00:23:01.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.231 "serial_number": "SPDK00000000000001" 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_subsystem_add_host", 00:23:01.231 "params": { 00:23:01.231 "host": "nqn.2016-06.io.spdk:host1", 00:23:01.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.231 "psk": "key0" 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_subsystem_add_ns", 00:23:01.231 "params": { 00:23:01.231 "namespace": { 00:23:01.231 "bdev_name": "malloc0", 00:23:01.231 "nguid": "E7AD874FBCA34029A0106FE3B6787DFA", 00:23:01.231 "no_auto_visible": false, 00:23:01.231 "nsid": 1, 00:23:01.231 "uuid": "e7ad874f-bca3-4029-a010-6fe3b6787dfa" 00:23:01.231 }, 00:23:01.231 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:01.231 } 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "method": "nvmf_subsystem_add_listener", 00:23:01.231 "params": { 00:23:01.231 "listen_address": { 00:23:01.231 "adrfam": "IPv4", 00:23:01.231 "traddr": "10.0.0.3", 00:23:01.231 "trsvcid": "4420", 00:23:01.231 "trtype": "TCP" 00:23:01.231 }, 00:23:01.231 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.231 "secure_channel": true 00:23:01.231 } 00:23:01.231 } 00:23:01.231 ] 00:23:01.231 } 00:23:01.231 ] 00:23:01.231 }' 00:23:01.231 17:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:01.799 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:01.799 "subsystems": [ 00:23:01.799 { 00:23:01.799 "subsystem": "keyring", 00:23:01.799 "config": [ 00:23:01.799 { 00:23:01.799 "method": "keyring_file_add_key", 00:23:01.799 "params": { 00:23:01.799 "name": "key0", 00:23:01.799 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:01.799 } 00:23:01.799 } 00:23:01.799 ] 00:23:01.799 }, 00:23:01.799 { 00:23:01.799 "subsystem": "iobuf", 00:23:01.799 "config": [ 00:23:01.799 { 00:23:01.799 "method": "iobuf_set_options", 00:23:01.799 "params": { 00:23:01.799 "large_bufsize": 135168, 00:23:01.799 "large_pool_count": 1024, 00:23:01.799 "small_bufsize": 8192, 00:23:01.799 "small_pool_count": 8192 00:23:01.799 } 00:23:01.799 } 00:23:01.799 ] 00:23:01.799 }, 00:23:01.799 { 00:23:01.799 "subsystem": "sock", 00:23:01.799 "config": [ 00:23:01.799 { 00:23:01.799 "method": "sock_set_default_impl", 00:23:01.799 "params": { 00:23:01.799 "impl_name": "posix" 00:23:01.799 } 00:23:01.799 }, 00:23:01.799 { 00:23:01.799 "method": "sock_impl_set_options", 00:23:01.799 "params": { 00:23:01.799 "enable_ktls": false, 00:23:01.799 "enable_placement_id": 0, 00:23:01.799 "enable_quickack": false, 00:23:01.799 "enable_recv_pipe": true, 00:23:01.799 "enable_zerocopy_send_client": false, 00:23:01.799 "enable_zerocopy_send_server": true, 00:23:01.799 "impl_name": "ssl", 00:23:01.799 "recv_buf_size": 4096, 00:23:01.799 "send_buf_size": 4096, 00:23:01.799 "tls_version": 0, 00:23:01.799 "zerocopy_threshold": 0 00:23:01.799 } 00:23:01.799 }, 00:23:01.799 { 00:23:01.799 "method": "sock_impl_set_options", 00:23:01.799 "params": { 00:23:01.799 "enable_ktls": false, 00:23:01.799 "enable_placement_id": 0, 00:23:01.799 "enable_quickack": false, 00:23:01.799 "enable_recv_pipe": true, 00:23:01.799 "enable_zerocopy_send_client": false, 00:23:01.799 "enable_zerocopy_send_server": true, 00:23:01.799 "impl_name": "posix", 00:23:01.799 "recv_buf_size": 2097152, 00:23:01.799 "send_buf_size": 2097152, 00:23:01.799 "tls_version": 0, 00:23:01.799 "zerocopy_threshold": 0 00:23:01.799 } 00:23:01.799 } 00:23:01.799 ] 00:23:01.799 }, 00:23:01.799 { 00:23:01.799 "subsystem": "vmd", 00:23:01.800 "config": [] 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "subsystem": "accel", 00:23:01.800 "config": [ 00:23:01.800 { 00:23:01.800 "method": "accel_set_options", 00:23:01.800 "params": { 00:23:01.800 "buf_count": 2048, 00:23:01.800 "large_cache_size": 16, 00:23:01.800 "sequence_count": 2048, 00:23:01.800 "small_cache_size": 128, 00:23:01.800 "task_count": 2048 00:23:01.800 } 00:23:01.800 } 00:23:01.800 ] 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "subsystem": "bdev", 00:23:01.800 "config": [ 00:23:01.800 { 00:23:01.800 "method": "bdev_set_options", 00:23:01.800 "params": { 00:23:01.800 "bdev_auto_examine": true, 00:23:01.800 "bdev_io_cache_size": 256, 00:23:01.800 "bdev_io_pool_size": 65535, 00:23:01.800 "iobuf_large_cache_size": 16, 00:23:01.800 "iobuf_small_cache_size": 128 00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_raid_set_options", 00:23:01.800 "params": { 00:23:01.800 "process_max_bandwidth_mb_sec": 0, 00:23:01.800 "process_window_size_kb": 1024 
00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_iscsi_set_options", 00:23:01.800 "params": { 00:23:01.800 "timeout_sec": 30 00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_nvme_set_options", 00:23:01.800 "params": { 00:23:01.800 "action_on_timeout": "none", 00:23:01.800 "allow_accel_sequence": false, 00:23:01.800 "arbitration_burst": 0, 00:23:01.800 "bdev_retry_count": 3, 00:23:01.800 "ctrlr_loss_timeout_sec": 0, 00:23:01.800 "delay_cmd_submit": true, 00:23:01.800 "dhchap_dhgroups": [ 00:23:01.800 "null", 00:23:01.800 "ffdhe2048", 00:23:01.800 "ffdhe3072", 00:23:01.800 "ffdhe4096", 00:23:01.800 "ffdhe6144", 00:23:01.800 "ffdhe8192" 00:23:01.800 ], 00:23:01.800 "dhchap_digests": [ 00:23:01.800 "sha256", 00:23:01.800 "sha384", 00:23:01.800 "sha512" 00:23:01.800 ], 00:23:01.800 "disable_auto_failback": false, 00:23:01.800 "fast_io_fail_timeout_sec": 0, 00:23:01.800 "generate_uuids": false, 00:23:01.800 "high_priority_weight": 0, 00:23:01.800 "io_path_stat": false, 00:23:01.800 "io_queue_requests": 512, 00:23:01.800 "keep_alive_timeout_ms": 10000, 00:23:01.800 "low_priority_weight": 0, 00:23:01.800 "medium_priority_weight": 0, 00:23:01.800 "nvme_adminq_poll_period_us": 10000, 00:23:01.800 "nvme_error_stat": false, 00:23:01.800 "nvme_ioq_poll_period_us": 0, 00:23:01.800 "rdma_cm_event_timeout_ms": 0, 00:23:01.800 "rdma_max_cq_size": 0, 00:23:01.800 "rdma_srq_size": 0, 00:23:01.800 "reconnect_delay_sec": 0, 00:23:01.800 "timeout_admin_us": 0, 00:23:01.800 "timeout_us": 0, 00:23:01.800 "transport_ack_timeout": 0, 00:23:01.800 "transport_retry_count": 4, 00:23:01.800 "transport_tos": 0 00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_nvme_attach_controller", 00:23:01.800 "params": { 00:23:01.800 "adrfam": "IPv4", 00:23:01.800 "ctrlr_loss_timeout_sec": 0, 00:23:01.800 "ddgst": false, 00:23:01.800 "fast_io_fail_timeout_sec": 0, 00:23:01.800 "hdgst": false, 00:23:01.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.800 "name": "TLSTEST", 00:23:01.800 "prchk_guard": false, 00:23:01.800 "prchk_reftag": false, 00:23:01.800 "psk": "key0", 00:23:01.800 "reconnect_delay_sec": 0, 00:23:01.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.800 "traddr": "10.0.0.3", 00:23:01.800 "trsvcid": "4420", 00:23:01.800 "trtype": "TCP" 00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_nvme_set_hotplug", 00:23:01.800 "params": { 00:23:01.800 "enable": false, 00:23:01.800 "period_us": 100000 00:23:01.800 } 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "method": "bdev_wait_for_examine" 00:23:01.800 } 00:23:01.800 ] 00:23:01.800 }, 00:23:01.800 { 00:23:01.800 "subsystem": "nbd", 00:23:01.800 "config": [] 00:23:01.800 } 00:23:01.800 ] 00:23:01.800 }' 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 101673 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101673 ']' 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101673 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101673 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101673' 00:23:01.800 killing process with pid 101673 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101673 00:23:01.800 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.800 00:23:01.800 Latency(us) 00:23:01.800 [2024-11-20T17:43:43.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.800 [2024-11-20T17:43:43.588Z] =================================================================================================================== 00:23:01.800 [2024-11-20T17:43:43.588Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.800 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101673 00:23:02.059 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 101558 00:23:02.059 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101558 ']' 00:23:02.059 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101558 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101558 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.060 killing process with pid 101558 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101558' 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101558 00:23:02.060 17:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101558 00:23:02.318 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:02.319 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:02.319 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:02.319 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:02.319 "subsystems": [ 00:23:02.319 { 00:23:02.319 "subsystem": "keyring", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "keyring_file_add_key", 00:23:02.319 "params": { 00:23:02.319 "name": "key0", 00:23:02.319 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:02.319 } 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "iobuf", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "iobuf_set_options", 00:23:02.319 "params": { 00:23:02.319 "large_bufsize": 135168, 00:23:02.319 "large_pool_count": 1024, 00:23:02.319 "small_bufsize": 8192, 00:23:02.319 "small_pool_count": 8192 00:23:02.319 } 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "sock", 00:23:02.319 "config": [ 00:23:02.319 
{ 00:23:02.319 "method": "sock_set_default_impl", 00:23:02.319 "params": { 00:23:02.319 "impl_name": "posix" 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "sock_impl_set_options", 00:23:02.319 "params": { 00:23:02.319 "enable_ktls": false, 00:23:02.319 "enable_placement_id": 0, 00:23:02.319 "enable_quickack": false, 00:23:02.319 "enable_recv_pipe": true, 00:23:02.319 "enable_zerocopy_send_client": false, 00:23:02.319 "enable_zerocopy_send_server": true, 00:23:02.319 "impl_name": "ssl", 00:23:02.319 "recv_buf_size": 4096, 00:23:02.319 "send_buf_size": 4096, 00:23:02.319 "tls_version": 0, 00:23:02.319 "zerocopy_threshold": 0 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "sock_impl_set_options", 00:23:02.319 "params": { 00:23:02.319 "enable_ktls": false, 00:23:02.319 "enable_placement_id": 0, 00:23:02.319 "enable_quickack": false, 00:23:02.319 "enable_recv_pipe": true, 00:23:02.319 "enable_zerocopy_send_client": false, 00:23:02.319 "enable_zerocopy_send_server": true, 00:23:02.319 "impl_name": "posix", 00:23:02.319 "recv_buf_size": 2097152, 00:23:02.319 "send_buf_size": 2097152, 00:23:02.319 "tls_version": 0, 00:23:02.319 "zerocopy_threshold": 0 00:23:02.319 } 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "vmd", 00:23:02.319 "config": [] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "accel", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "accel_set_options", 00:23:02.319 "params": { 00:23:02.319 "buf_count": 2048, 00:23:02.319 "large_cache_size": 16, 00:23:02.319 "sequence_count": 2048, 00:23:02.319 "small_cache_size": 128, 00:23:02.319 "task_count": 2048 00:23:02.319 } 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "bdev", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "bdev_set_options", 00:23:02.319 "params": { 00:23:02.319 "bdev_auto_examine": true, 00:23:02.319 "bdev_io_cache_size": 256, 00:23:02.319 "bdev_io_pool_size": 65535, 00:23:02.319 "iobuf_large_cache_size": 16, 00:23:02.319 "iobuf_small_cache_size": 128 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_raid_set_options", 00:23:02.319 "params": { 00:23:02.319 "process_max_bandwidth_mb_sec": 0, 00:23:02.319 "process_window_size_kb": 1024 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_iscsi_set_options", 00:23:02.319 "params": { 00:23:02.319 "timeout_sec": 30 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_nvme_set_options", 00:23:02.319 "params": { 00:23:02.319 "action_on_timeout": "none", 00:23:02.319 "allow_accel_sequence": false, 00:23:02.319 "arbitration_burst": 0, 00:23:02.319 "bdev_retry_count": 3, 00:23:02.319 "ctrlr_loss_timeout_sec": 0, 00:23:02.319 "delay_cmd_submit": true, 00:23:02.319 "dhchap_dhgroups": [ 00:23:02.319 "null", 00:23:02.319 "ffdhe2048", 00:23:02.319 "ffdhe3072", 00:23:02.319 "ffdhe4096", 00:23:02.319 "ffdhe6144", 00:23:02.319 "ffdhe8192" 00:23:02.319 ], 00:23:02.319 "dhchap_digests": [ 00:23:02.319 "sha256", 00:23:02.319 "sha384", 00:23:02.319 "sha512" 00:23:02.319 ], 00:23:02.319 "disable_auto_failback": false, 00:23:02.319 "fast_io_fail_timeout_sec": 0, 00:23:02.319 "generate_uuids": false, 00:23:02.319 "high_priority_weight": 0, 00:23:02.319 "io_path_stat": false, 00:23:02.319 "io_queue_requests": 0, 00:23:02.319 "keep_alive_timeout_ms": 10000, 00:23:02.319 "low_priority_weight": 0, 00:23:02.319 "medium_priority_weight": 0, 00:23:02.319 
"nvme_adminq_poll_period_us": 10000, 00:23:02.319 "nvme_error_stat": false, 00:23:02.319 "nvme_ioq_poll_period_us": 0, 00:23:02.319 "rdma_cm_event_timeout_ms": 0, 00:23:02.319 "rdma_max_cq_size": 0, 00:23:02.319 "rdma_srq_size": 0, 00:23:02.319 "reconnect_delay_sec": 0, 00:23:02.319 "timeout_admin_us": 0, 00:23:02.319 "timeout_us": 0, 00:23:02.319 "transport_ack_timeout": 0, 00:23:02.319 "transport_retry_count": 4, 00:23:02.319 "transport_tos": 0 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_nvme_set_hotplug", 00:23:02.319 "params": { 00:23:02.319 "enable": false, 00:23:02.319 "period_us": 100000 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_malloc_create", 00:23:02.319 "params": { 00:23:02.319 "block_size": 4096, 00:23:02.319 "dif_is_head_of_md": false, 00:23:02.319 "dif_pi_format": 0, 00:23:02.319 "dif_type": 0, 00:23:02.319 "md_size": 0, 00:23:02.319 "name": "malloc0", 00:23:02.319 "num_blocks": 8192, 00:23:02.319 "optimal_io_boundary": 0, 00:23:02.319 "physical_block_size": 4096, 00:23:02.319 "uuid": "e7ad874f-bca3-4029-a010-6fe3b6787dfa" 00:23:02.319 } 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "method": "bdev_wait_for_examine" 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "nbd", 00:23:02.319 "config": [] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "scheduler", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "framework_set_scheduler", 00:23:02.319 "params": { 00:23:02.319 "name": "static" 00:23:02.319 } 00:23:02.319 } 00:23:02.319 ] 00:23:02.319 }, 00:23:02.319 { 00:23:02.319 "subsystem": "nvmf", 00:23:02.319 "config": [ 00:23:02.319 { 00:23:02.319 "method": "nvmf_set_config", 00:23:02.319 "params": { 00:23:02.319 "admin_cmd_passthru": { 00:23:02.319 "identify_ctrlr": false 00:23:02.319 }, 00:23:02.319 "dhchap_dhgroups": [ 00:23:02.319 "null", 00:23:02.319 "ffdhe2048", 00:23:02.319 "ffdhe3072", 00:23:02.319 "ffdhe4096", 00:23:02.319 "ffdhe6144", 00:23:02.319 "ffdhe8192" 00:23:02.319 ], 00:23:02.319 "dhchap_digests": [ 00:23:02.319 "sha256", 00:23:02.319 "sha384", 00:23:02.319 "sha512" 00:23:02.319 ], 00:23:02.319 "discovery_filter": "match_any" 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_set_max_subsystems", 00:23:02.320 "params": { 00:23:02.320 "max_subsystems": 1024 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_set_crdt", 00:23:02.320 "params": { 00:23:02.320 "crdt1": 0, 00:23:02.320 "crdt2": 0, 00:23:02.320 "crdt3": 0 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_create_transport", 00:23:02.320 "params": { 00:23:02.320 "abort_timeout_sec": 1, 00:23:02.320 "ack_timeout": 0, 00:23:02.320 "buf_cache_size": 4294967295, 00:23:02.320 "c2h_success": false, 00:23:02.320 "data_wr_pool_size": 0, 00:23:02.320 "dif_insert_or_strip": false, 00:23:02.320 "in_capsule_data_size": 4096, 00:23:02.320 "io_unit_size": 131072, 00:23:02.320 "max_aq_depth": 128, 00:23:02.320 "max_io_qpairs_per_ctrlr": 127, 00:23:02.320 "max_io_size": 131072, 00:23:02.320 "max_queue_depth": 128, 00:23:02.320 "num_shared_buffers": 511, 00:23:02.320 "sock_priority": 0, 00:23:02.320 "trtype": "TCP", 00:23:02.320 "zcopy": false 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_create_subsystem", 00:23:02.320 "params": { 00:23:02.320 "allow_any_host": false, 00:23:02.320 "ana_reporting": false, 00:23:02.320 "max_cntlid": 65519, 00:23:02.320 "max_namespaces": 10, 00:23:02.320 "min_cntlid": 
1, 00:23:02.320 "model_number": "SPDK bdev Controller", 00:23:02.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.320 "serial_number": "SPDK00000000000001" 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_subsystem_add_host", 00:23:02.320 "params": { 00:23:02.320 "host": "nqn.2016-06.io.spdk:host1", 00:23:02.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.320 "psk": "key0" 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_subsystem_add_ns", 00:23:02.320 "params": { 00:23:02.320 "namespace": { 00:23:02.320 "bdev_name": "malloc0", 00:23:02.320 "nguid": "E7AD874FBCA34029A0106FE3B6787DFA", 00:23:02.320 "no_auto_visible": false, 00:23:02.320 "nsid": 1, 00:23:02.320 "uuid": "e7ad874f-bca3-4029-a010-6fe3b6787dfa" 00:23:02.320 }, 00:23:02.320 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:02.320 } 00:23:02.320 }, 00:23:02.320 { 00:23:02.320 "method": "nvmf_subsystem_add_listener", 00:23:02.320 "params": { 00:23:02.320 "listen_address": { 00:23:02.320 "adrfam": "IPv4", 00:23:02.320 "traddr": "10.0.0.3", 00:23:02.320 "trsvcid": "4420", 00:23:02.320 "trtype": "TCP" 00:23:02.320 }, 00:23:02.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.320 "secure_channel": true 00:23:02.320 } 00:23:02.320 } 00:23:02.320 ] 00:23:02.320 } 00:23:02.320 ] 00:23:02.320 }' 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101770 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101770 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101770 ']' 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.320 17:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.578 [2024-11-20 17:43:44.115867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:02.578 [2024-11-20 17:43:44.116016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.578 [2024-11-20 17:43:44.260482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.836 [2024-11-20 17:43:44.396326] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.836 [2024-11-20 17:43:44.396408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
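
The $tgtconf JSON captured by save_config at tls.sh@198 is not just logged: tls.sh@205 feeds it back into a fresh nvmf_tgt as /dev/fd/62, so pid 101770 comes up pre-configured with the same keyring, subsystem, and TLS listener. A sketch of the replay pattern, assuming bash process substitution and ignoring the netns wrapper and trace flags the test adds:

    tgtconf=$(scripts/rpc.py save_config)      # snapshot the live target's configuration
    # <(...) exposes the JSON as a /dev/fd/NN path (fd 62 in this run)
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")
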
00:23:02.836 [2024-11-20 17:43:44.396422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.836 [2024-11-20 17:43:44.396433] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.836 [2024-11-20 17:43:44.396442] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.836 [2024-11-20 17:43:44.396550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.095 [2024-11-20 17:43:44.686440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.095 [2024-11-20 17:43:44.724482] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.095 [2024-11-20 17:43:44.724774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=101814 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 101814 /var/tmp/bdevperf.sock 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101814 ']' 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
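
waitforlisten then blocks until the forked bdevperf answers on its UNIX-domain RPC socket, retrying up to the max_retries=100 seen in the trace. A simplified stand-in for that helper, assuming rpc.py and the spdk_get_version RPC, which succeeds as soon as the app is listening:

    wait_for_rpc() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            # any cheap RPC works as a liveness probe once the socket is up
            scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/bdevperf.sock
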
00:23:03.662 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:03.662 "subsystems": [ 00:23:03.662 { 00:23:03.662 "subsystem": "keyring", 00:23:03.662 "config": [ 00:23:03.662 { 00:23:03.662 "method": "keyring_file_add_key", 00:23:03.662 "params": { 00:23:03.662 "name": "key0", 00:23:03.662 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:03.662 } 00:23:03.662 } 00:23:03.662 ] 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "subsystem": "iobuf", 00:23:03.662 "config": [ 00:23:03.662 { 00:23:03.662 "method": "iobuf_set_options", 00:23:03.662 "params": { 00:23:03.662 "large_bufsize": 135168, 00:23:03.662 "large_pool_count": 1024, 00:23:03.662 "small_bufsize": 8192, 00:23:03.662 "small_pool_count": 8192 00:23:03.662 } 00:23:03.662 } 00:23:03.662 ] 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "subsystem": "sock", 00:23:03.662 "config": [ 00:23:03.662 { 00:23:03.662 "method": "sock_set_default_impl", 00:23:03.662 "params": { 00:23:03.662 "impl_name": "posix" 00:23:03.662 } 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "method": "sock_impl_set_options", 00:23:03.662 "params": { 00:23:03.662 "enable_ktls": false, 00:23:03.662 "enable_placement_id": 0, 00:23:03.662 "enable_quickack": false, 00:23:03.662 "enable_recv_pipe": true, 00:23:03.662 "enable_zerocopy_send_client": false, 00:23:03.662 "enable_zerocopy_send_server": true, 00:23:03.662 "impl_name": "ssl", 00:23:03.662 "recv_buf_size": 4096, 00:23:03.662 "send_buf_size": 4096, 00:23:03.662 "tls_version": 0, 00:23:03.662 "zerocopy_threshold": 0 00:23:03.662 } 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "method": "sock_impl_set_options", 00:23:03.662 "params": { 00:23:03.662 "enable_ktls": false, 00:23:03.662 "enable_placement_id": 0, 00:23:03.662 "enable_quickack": false, 00:23:03.662 "enable_recv_pipe": true, 00:23:03.662 "enable_zerocopy_send_client": false, 00:23:03.662 "enable_zerocopy_send_server": true, 00:23:03.662 "impl_name": "posix", 00:23:03.662 "recv_buf_size": 2097152, 00:23:03.662 "send_buf_size": 2097152, 00:23:03.662 "tls_version": 0, 00:23:03.662 "zerocopy_threshold": 0 00:23:03.662 } 00:23:03.662 } 00:23:03.662 ] 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "subsystem": "vmd", 00:23:03.662 "config": [] 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "subsystem": "accel", 00:23:03.662 "config": [ 00:23:03.662 { 00:23:03.662 "method": "accel_set_options", 00:23:03.662 "params": { 00:23:03.662 "buf_count": 2048, 00:23:03.662 "large_cache_size": 16, 00:23:03.662 "sequence_count": 2048, 00:23:03.662 "small_cache_size": 128, 00:23:03.662 "task_count": 2048 00:23:03.662 } 00:23:03.662 } 00:23:03.662 ] 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "subsystem": "bdev", 00:23:03.662 "config": [ 00:23:03.662 { 00:23:03.662 "method": "bdev_set_options", 00:23:03.662 "params": { 00:23:03.662 "bdev_auto_examine": true, 00:23:03.662 "bdev_io_cache_size": 256, 00:23:03.662 "bdev_io_pool_size": 65535, 00:23:03.662 "iobuf_large_cache_size": 16, 00:23:03.662 "iobuf_small_cache_size": 128 00:23:03.662 } 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "method": "bdev_raid_set_options", 00:23:03.662 "params": { 00:23:03.662 "process_max_bandwidth_mb_sec": 0, 00:23:03.662 "process_window_size_kb": 1024 00:23:03.662 } 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "method": "bdev_iscsi_set_options", 00:23:03.662 "params": { 00:23:03.662 "timeout_sec": 30 00:23:03.662 } 00:23:03.662 }, 00:23:03.662 { 00:23:03.662 "method": "bdev_nvme_set_options", 00:23:03.662 "params": { 00:23:03.662 "action_on_timeout": "none", 00:23:03.662 "allow_accel_sequence": 
false, 00:23:03.662 "arbitration_burst": 0, 00:23:03.662 "bdev_retry_count": 3, 00:23:03.662 "ctrlr_loss_timeout_sec": 0, 00:23:03.662 "delay_cmd_submit": true, 00:23:03.662 "dhchap_dhgroups": [ 00:23:03.662 "null", 00:23:03.662 "ffdhe2048", 00:23:03.662 "ffdhe3072", 00:23:03.662 "ffdhe4096", 00:23:03.662 "ffdhe6144", 00:23:03.662 "ffdhe8192" 00:23:03.662 ], 00:23:03.662 "dhchap_digests": [ 00:23:03.662 "sha256", 00:23:03.662 "sha384", 00:23:03.662 "sha512" 00:23:03.662 ], 00:23:03.662 "disable_auto_failback": false, 00:23:03.662 "fast_io_fail_timeout_sec": 0, 00:23:03.662 "generate_uuids": false, 00:23:03.662 "high_priority_weight": 0, 00:23:03.662 "io_path_stat": false, 00:23:03.662 "io_queue_requests": 512, 00:23:03.662 "keep_alive_timeout_ms": 10000, 00:23:03.662 "low_priority_weight": 0, 00:23:03.662 "medium_priority_weight": 0, 00:23:03.662 "nvme_adminq_poll_period_us": 10000, 00:23:03.662 "nvme_error_stat": false, 00:23:03.662 "nvme_ioq_poll_period_us": 0, 00:23:03.662 "rdma_cm_event_timeout_ms": 0, 00:23:03.662 "rdma_max_cq_size": 0, 00:23:03.662 "rdma_srq_size": 0, 00:23:03.662 "reconnect_delay_sec": 0, 00:23:03.662 "timeout_admin_us": 0, 00:23:03.662 "timeout_us": 0, 00:23:03.662 "transport_ack_timeout": 0, 00:23:03.663 "transport_retry_count": 4, 00:23:03.663 "transport_tos": 0 00:23:03.663 } 00:23:03.663 }, 00:23:03.663 { 00:23:03.663 "method": "bdev_nvme_attach_controller", 00:23:03.663 "params": { 00:23:03.663 "adrfam": "IPv4", 00:23:03.663 "ctrlr_loss_timeout_sec": 0, 00:23:03.663 "ddgst": false, 00:23:03.663 "fast_io_fail_timeout_sec": 0, 00:23:03.663 "hdgst": false, 00:23:03.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.663 "name": "TLSTEST", 00:23:03.663 "prchk_guard": false, 00:23:03.663 "prchk_reftag": false, 00:23:03.663 "psk": "key0", 00:23:03.663 "reconnect_delay_sec": 0, 00:23:03.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.663 "traddr": "10.0.0.3", 00:23:03.663 "trsvcid": "4420", 00:23:03.663 "trtype": "TCP" 00:23:03.663 } 00:23:03.663 }, 00:23:03.663 { 00:23:03.663 "method": "bdev_nvme_set_hotplug", 00:23:03.663 "params": { 00:23:03.663 "enable": false, 00:23:03.663 "period_us": 100000 00:23:03.663 } 00:23:03.663 }, 00:23:03.663 { 00:23:03.663 "method": "bdev_wait_for_examine" 00:23:03.663 } 00:23:03.663 ] 00:23:03.663 }, 00:23:03.663 { 00:23:03.663 "subsystem": "nbd", 00:23:03.663 "config": [] 00:23:03.663 } 00:23:03.663 ] 00:23:03.663 }' 00:23:03.663 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.663 17:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.663 [2024-11-20 17:43:45.386103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
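
On the initiator side the TLS handshake is driven entirely by bdev_nvme_attach_controller: the key is first registered in the bdevperf app's own keyring, then named via --psk at connect time, exactly as in the tls.sh@193/@194 pair earlier. A sketch against the bdevperf RPC socket, reusing this run's addresses and NQNs (only the PSK path is a placeholder):

    # The key must exist in the *initiator* app's keyring, not just the target's
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /path/to/psk.txt
    # TLS is negotiated during connect; --psk selects the keyring entry
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
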
00:23:03.663 [2024-11-20 17:43:45.386261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101814 ] 00:23:03.923 [2024-11-20 17:43:45.530060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.923 [2024-11-20 17:43:45.663394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.181 [2024-11-20 17:43:45.881039] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.749 17:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.749 17:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:04.749 17:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.007 Running I/O for 10 seconds... 00:23:06.903 3868.00 IOPS, 15.11 MiB/s [2024-11-20T17:43:49.769Z] 3777.50 IOPS, 14.76 MiB/s [2024-11-20T17:43:50.702Z] 3800.33 IOPS, 14.85 MiB/s [2024-11-20T17:43:51.635Z] 3842.50 IOPS, 15.01 MiB/s [2024-11-20T17:43:53.009Z] 3834.60 IOPS, 14.98 MiB/s [2024-11-20T17:43:53.948Z] 3859.33 IOPS, 15.08 MiB/s [2024-11-20T17:43:54.882Z] 3885.71 IOPS, 15.18 MiB/s [2024-11-20T17:43:55.817Z] 3906.38 IOPS, 15.26 MiB/s [2024-11-20T17:43:56.753Z] 3960.67 IOPS, 15.47 MiB/s [2024-11-20T17:43:56.753Z] 3986.40 IOPS, 15.57 MiB/s 00:23:14.965 Latency(us) 00:23:14.965 [2024-11-20T17:43:56.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.965 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:14.965 Verification LBA range: start 0x0 length 0x2000 00:23:14.965 TLSTESTn1 : 10.02 3992.73 15.60 0.00 0.00 32000.78 5779.08 25022.84 00:23:14.965 [2024-11-20T17:43:56.753Z] =================================================================================================================== 00:23:14.965 [2024-11-20T17:43:56.753Z] Total : 3992.73 15.60 0.00 0.00 32000.78 5779.08 25022.84 00:23:14.965 { 00:23:14.965 "results": [ 00:23:14.965 { 00:23:14.965 "job": "TLSTESTn1", 00:23:14.965 "core_mask": "0x4", 00:23:14.965 "workload": "verify", 00:23:14.965 "status": "finished", 00:23:14.965 "verify_range": { 00:23:14.965 "start": 0, 00:23:14.965 "length": 8192 00:23:14.965 }, 00:23:14.965 "queue_depth": 128, 00:23:14.965 "io_size": 4096, 00:23:14.965 "runtime": 10.016199, 00:23:14.965 "iops": 3992.73217315271, 00:23:14.965 "mibps": 15.596610051377773, 00:23:14.965 "io_failed": 0, 00:23:14.965 "io_timeout": 0, 00:23:14.965 "avg_latency_us": 32000.784168470058, 00:23:14.965 "min_latency_us": 5779.083636363636, 00:23:14.965 "max_latency_us": 25022.836363636365 00:23:14.965 } 00:23:14.965 ], 00:23:14.965 "core_count": 1 00:23:14.965 } 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 101814 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101814 ']' 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101814 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101814 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:14.965 killing process with pid 101814 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101814' 00:23:14.965 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.965 00:23:14.965 Latency(us) 00:23:14.965 [2024-11-20T17:43:56.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.965 [2024-11-20T17:43:56.753Z] =================================================================================================================== 00:23:14.965 [2024-11-20T17:43:56.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101814 00:23:14.965 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101814 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 101770 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101770 ']' 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101770 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.226 17:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101770 00:23:15.488 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:15.488 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:15.488 killing process with pid 101770 00:23:15.488 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101770' 00:23:15.488 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101770 00:23:15.488 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101770 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=101960 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 101960 00:23:15.747 17:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 101960 ']' 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.747 17:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.747 [2024-11-20 17:43:57.403250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:15.747 [2024-11-20 17:43:57.403354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.005 [2024-11-20 17:43:57.538863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.005 [2024-11-20 17:43:57.640205] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.005 [2024-11-20 17:43:57.640281] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.005 [2024-11-20 17:43:57.640296] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.005 [2024-11-20 17:43:57.640307] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.005 [2024-11-20 17:43:57.640316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
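
The notices above come from starting nvmf_tgt with -e 0xFFFF, which enables every tracepoint group and backs them with the shared-memory file named in the log. Following the log's own hint, a snapshot can be taken while the target runs; a sketch, assuming the spdk_trace tool is built alongside the target (the path may differ per build):

    # Live snapshot of the trace buffer (shm name "nvmf", instance id 0)
    build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw buffer for offline analysis, as notice 618 suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
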
00:23:16.005 [2024-11-20 17:43:57.640351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.uIBRaQWr5B 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uIBRaQWr5B 00:23:16.941 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:17.199 [2024-11-20 17:43:58.821002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.199 17:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:17.457 17:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:17.715 [2024-11-20 17:43:59.473188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.715 [2024-11-20 17:43:59.474953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.715 17:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:17.973 malloc0 00:23:18.233 17:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:18.494 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:23:18.752 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=102076 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 102076 /var/tmp/bdevperf.sock 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102076 ']' 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
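
setup_nvmf_tgt (tls.sh@50-59, traced above) condenses the whole TLS-capable target into seven RPCs. Laid out in order, with the same names and addresses this run uses and only the PSK path hedged as a placeholder:

    scripts/rpc.py nvmf_create_transport -t tcp -o     # TCP transport (saved config shows c2h_success: false)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10                    # subsystem, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -k                  # -k: TLS / secure-channel listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB backing bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /path/to/psk.txt
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
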
00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.010 17:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.268 [2024-11-20 17:44:00.801455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:19.269 [2024-11-20 17:44:00.801556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102076 ] 00:23:19.269 [2024-11-20 17:44:00.940498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.527 [2024-11-20 17:44:01.075913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.463 17:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.463 17:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:20.463 17:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:23:20.721 17:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:20.980 [2024-11-20 17:44:02.622533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.980 nvme0n1 00:23:20.980 17:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.239 Running I/O for 1 seconds... 
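
The MiB/s column bdevperf prints is derived directly from IOPS and the 4096-byte I/O size, which makes the result JSONs in this log easy to sanity-check; for the 10-second TLSTESTn1 run above, 3992.73 IOPS comes out at the reported 15.60 MiB/s. The arithmetic as a one-liner:

    # MiB/s = IOPS * io_size / 2^20 -- matches bdevperf's "mibps" field
    awk 'BEGIN { printf "%.2f MiB/s\n", 3992.73 * 4096 / 1048576 }'   # -> 15.60 MiB/s
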
00:23:22.176 3227.00 IOPS, 12.61 MiB/s 00:23:22.176 Latency(us) 00:23:22.176 [2024-11-20T17:44:03.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.176 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:22.176 Verification LBA range: start 0x0 length 0x2000 00:23:22.176 nvme0n1 : 1.02 3286.51 12.84 0.00 0.00 38426.95 3053.38 33363.78 00:23:22.176 [2024-11-20T17:44:03.964Z] =================================================================================================================== 00:23:22.176 [2024-11-20T17:44:03.964Z] Total : 3286.51 12.84 0.00 0.00 38426.95 3053.38 33363.78 00:23:22.176 { 00:23:22.176 "results": [ 00:23:22.176 { 00:23:22.176 "job": "nvme0n1", 00:23:22.176 "core_mask": "0x2", 00:23:22.176 "workload": "verify", 00:23:22.176 "status": "finished", 00:23:22.176 "verify_range": { 00:23:22.176 "start": 0, 00:23:22.176 "length": 8192 00:23:22.176 }, 00:23:22.176 "queue_depth": 128, 00:23:22.176 "io_size": 4096, 00:23:22.176 "runtime": 1.02084, 00:23:22.176 "iops": 3286.5091493280042, 00:23:22.176 "mibps": 12.837926364562517, 00:23:22.176 "io_failed": 0, 00:23:22.176 "io_timeout": 0, 00:23:22.176 "avg_latency_us": 38426.95280097548, 00:23:22.176 "min_latency_us": 3053.3818181818183, 00:23:22.176 "max_latency_us": 33363.781818181815 00:23:22.176 } 00:23:22.176 ], 00:23:22.176 "core_count": 1 00:23:22.176 } 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 102076 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102076 ']' 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102076 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102076 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.176 killing process with pid 102076 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102076' 00:23:22.176 Received shutdown signal, test time was about 1.000000 seconds 00:23:22.176 00:23:22.176 Latency(us) 00:23:22.176 [2024-11-20T17:44:03.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.176 [2024-11-20T17:44:03.964Z] =================================================================================================================== 00:23:22.176 [2024-11-20T17:44:03.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102076 00:23:22.176 17:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102076 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 101960 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 101960 ']' 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 101960 00:23:22.743 17:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101960 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101960' 00:23:22.743 killing process with pid 101960 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 101960 00:23:22.743 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 101960 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=102157 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 102157 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102157 ']' 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.002 17:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.002 [2024-11-20 17:44:04.674465] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:23.002 [2024-11-20 17:44:04.674660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.261 [2024-11-20 17:44:04.821173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.261 [2024-11-20 17:44:04.952149] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.261 [2024-11-20 17:44:04.952232] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
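
killprocess, traced at each teardown in this log (pids 101673, 101814, 101770, 102076, 101960), always follows the same shape: guard the pid argument, confirm the process is alive, check its comm name against sudo, then kill and wait. A condensed sketch of the helper as it appears in the xtrace, assuming Linux and omitting the separate handling the real helper has for sudo-wrapped children:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1              # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1
        fi
        [ "$process_name" = sudo ] && return 1              # real helper kills the child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
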
00:23:23.261 [2024-11-20 17:44:04.952244] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.261 [2024-11-20 17:44:04.952254] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.261 [2024-11-20 17:44:04.952262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.261 [2024-11-20 17:44:04.952299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.195 [2024-11-20 17:44:05.752541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.195 malloc0 00:23:24.195 [2024-11-20 17:44:05.787652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.195 [2024-11-20 17:44:05.787953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=102207 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 102207 /var/tmp/bdevperf.sock 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102207 ']' 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.195 17:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.195 [2024-11-20 17:44:05.872787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:24.195 [2024-11-20 17:44:05.872887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102207 ] 00:23:24.453 [2024-11-20 17:44:06.012148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.453 [2024-11-20 17:44:06.151460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.387 17:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.387 17:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:25.387 17:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B 00:23:25.644 17:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:25.902 [2024-11-20 17:44:07.642514] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.160 nvme0n1 00:23:26.160 17:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.160 Running I/O for 1 seconds... 00:23:27.093 3753.00 IOPS, 14.66 MiB/s 00:23:27.093 Latency(us) 00:23:27.093 [2024-11-20T17:44:08.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.093 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:27.093 Verification LBA range: start 0x0 length 0x2000 00:23:27.093 nvme0n1 : 1.02 3797.10 14.83 0.00 0.00 33237.68 4766.25 23712.12 00:23:27.093 [2024-11-20T17:44:08.881Z] =================================================================================================================== 00:23:27.093 [2024-11-20T17:44:08.881Z] Total : 3797.10 14.83 0.00 0.00 33237.68 4766.25 23712.12 00:23:27.093 { 00:23:27.094 "results": [ 00:23:27.094 { 00:23:27.094 "job": "nvme0n1", 00:23:27.094 "core_mask": "0x2", 00:23:27.094 "workload": "verify", 00:23:27.094 "status": "finished", 00:23:27.094 "verify_range": { 00:23:27.094 "start": 0, 00:23:27.094 "length": 8192 00:23:27.094 }, 00:23:27.094 "queue_depth": 128, 00:23:27.094 "io_size": 4096, 00:23:27.094 "runtime": 1.022096, 00:23:27.094 "iops": 3797.099293999781, 00:23:27.094 "mibps": 14.832419117186644, 00:23:27.094 "io_failed": 0, 00:23:27.094 "io_timeout": 0, 00:23:27.094 "avg_latency_us": 33237.675178374826, 00:23:27.094 "min_latency_us": 4766.254545454545, 00:23:27.094 "max_latency_us": 23712.116363636364 00:23:27.094 } 00:23:27.094 ], 00:23:27.094 "core_count": 1 00:23:27.094 } 00:23:27.353 17:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:27.353 17:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.353 17:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.353 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.353 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # 
tgtcfg='{ 00:23:27.353 "subsystems": [ 00:23:27.353 { 00:23:27.353 "subsystem": "keyring", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "keyring_file_add_key", 00:23:27.353 "params": { 00:23:27.353 "name": "key0", 00:23:27.353 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:27.353 } 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "iobuf", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "iobuf_set_options", 00:23:27.353 "params": { 00:23:27.353 "large_bufsize": 135168, 00:23:27.353 "large_pool_count": 1024, 00:23:27.353 "small_bufsize": 8192, 00:23:27.353 "small_pool_count": 8192 00:23:27.353 } 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "sock", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "sock_set_default_impl", 00:23:27.353 "params": { 00:23:27.353 "impl_name": "posix" 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "sock_impl_set_options", 00:23:27.353 "params": { 00:23:27.353 "enable_ktls": false, 00:23:27.353 "enable_placement_id": 0, 00:23:27.353 "enable_quickack": false, 00:23:27.353 "enable_recv_pipe": true, 00:23:27.353 "enable_zerocopy_send_client": false, 00:23:27.353 "enable_zerocopy_send_server": true, 00:23:27.353 "impl_name": "ssl", 00:23:27.353 "recv_buf_size": 4096, 00:23:27.353 "send_buf_size": 4096, 00:23:27.353 "tls_version": 0, 00:23:27.353 "zerocopy_threshold": 0 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "sock_impl_set_options", 00:23:27.353 "params": { 00:23:27.353 "enable_ktls": false, 00:23:27.353 "enable_placement_id": 0, 00:23:27.353 "enable_quickack": false, 00:23:27.353 "enable_recv_pipe": true, 00:23:27.353 "enable_zerocopy_send_client": false, 00:23:27.353 "enable_zerocopy_send_server": true, 00:23:27.353 "impl_name": "posix", 00:23:27.353 "recv_buf_size": 2097152, 00:23:27.353 "send_buf_size": 2097152, 00:23:27.353 "tls_version": 0, 00:23:27.353 "zerocopy_threshold": 0 00:23:27.353 } 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "vmd", 00:23:27.353 "config": [] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "accel", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "accel_set_options", 00:23:27.353 "params": { 00:23:27.353 "buf_count": 2048, 00:23:27.353 "large_cache_size": 16, 00:23:27.353 "sequence_count": 2048, 00:23:27.353 "small_cache_size": 128, 00:23:27.353 "task_count": 2048 00:23:27.353 } 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "bdev", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "bdev_set_options", 00:23:27.353 "params": { 00:23:27.353 "bdev_auto_examine": true, 00:23:27.353 "bdev_io_cache_size": 256, 00:23:27.353 "bdev_io_pool_size": 65535, 00:23:27.353 "iobuf_large_cache_size": 16, 00:23:27.353 "iobuf_small_cache_size": 128 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_raid_set_options", 00:23:27.353 "params": { 00:23:27.353 "process_max_bandwidth_mb_sec": 0, 00:23:27.353 "process_window_size_kb": 1024 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_iscsi_set_options", 00:23:27.353 "params": { 00:23:27.353 "timeout_sec": 30 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_nvme_set_options", 00:23:27.353 "params": { 00:23:27.353 "action_on_timeout": "none", 00:23:27.353 "allow_accel_sequence": false, 00:23:27.353 "arbitration_burst": 0, 00:23:27.353 "bdev_retry_count": 3, 
00:23:27.353 "ctrlr_loss_timeout_sec": 0, 00:23:27.353 "delay_cmd_submit": true, 00:23:27.353 "dhchap_dhgroups": [ 00:23:27.353 "null", 00:23:27.353 "ffdhe2048", 00:23:27.353 "ffdhe3072", 00:23:27.353 "ffdhe4096", 00:23:27.353 "ffdhe6144", 00:23:27.353 "ffdhe8192" 00:23:27.353 ], 00:23:27.353 "dhchap_digests": [ 00:23:27.353 "sha256", 00:23:27.353 "sha384", 00:23:27.353 "sha512" 00:23:27.353 ], 00:23:27.353 "disable_auto_failback": false, 00:23:27.353 "fast_io_fail_timeout_sec": 0, 00:23:27.353 "generate_uuids": false, 00:23:27.353 "high_priority_weight": 0, 00:23:27.353 "io_path_stat": false, 00:23:27.353 "io_queue_requests": 0, 00:23:27.353 "keep_alive_timeout_ms": 10000, 00:23:27.353 "low_priority_weight": 0, 00:23:27.353 "medium_priority_weight": 0, 00:23:27.353 "nvme_adminq_poll_period_us": 10000, 00:23:27.353 "nvme_error_stat": false, 00:23:27.353 "nvme_ioq_poll_period_us": 0, 00:23:27.353 "rdma_cm_event_timeout_ms": 0, 00:23:27.353 "rdma_max_cq_size": 0, 00:23:27.353 "rdma_srq_size": 0, 00:23:27.353 "reconnect_delay_sec": 0, 00:23:27.353 "timeout_admin_us": 0, 00:23:27.353 "timeout_us": 0, 00:23:27.353 "transport_ack_timeout": 0, 00:23:27.353 "transport_retry_count": 4, 00:23:27.353 "transport_tos": 0 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_nvme_set_hotplug", 00:23:27.353 "params": { 00:23:27.353 "enable": false, 00:23:27.353 "period_us": 100000 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_malloc_create", 00:23:27.353 "params": { 00:23:27.353 "block_size": 4096, 00:23:27.353 "dif_is_head_of_md": false, 00:23:27.353 "dif_pi_format": 0, 00:23:27.353 "dif_type": 0, 00:23:27.353 "md_size": 0, 00:23:27.353 "name": "malloc0", 00:23:27.353 "num_blocks": 8192, 00:23:27.353 "optimal_io_boundary": 0, 00:23:27.353 "physical_block_size": 4096, 00:23:27.353 "uuid": "9016a3b9-0af6-49d3-b305-ff96143b0875" 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "bdev_wait_for_examine" 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "nbd", 00:23:27.353 "config": [] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "scheduler", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "framework_set_scheduler", 00:23:27.353 "params": { 00:23:27.353 "name": "static" 00:23:27.353 } 00:23:27.353 } 00:23:27.353 ] 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "subsystem": "nvmf", 00:23:27.353 "config": [ 00:23:27.353 { 00:23:27.353 "method": "nvmf_set_config", 00:23:27.353 "params": { 00:23:27.353 "admin_cmd_passthru": { 00:23:27.353 "identify_ctrlr": false 00:23:27.353 }, 00:23:27.353 "dhchap_dhgroups": [ 00:23:27.353 "null", 00:23:27.353 "ffdhe2048", 00:23:27.353 "ffdhe3072", 00:23:27.353 "ffdhe4096", 00:23:27.353 "ffdhe6144", 00:23:27.353 "ffdhe8192" 00:23:27.353 ], 00:23:27.353 "dhchap_digests": [ 00:23:27.353 "sha256", 00:23:27.353 "sha384", 00:23:27.353 "sha512" 00:23:27.353 ], 00:23:27.353 "discovery_filter": "match_any" 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "nvmf_set_max_subsystems", 00:23:27.353 "params": { 00:23:27.353 "max_subsystems": 1024 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "nvmf_set_crdt", 00:23:27.353 "params": { 00:23:27.353 "crdt1": 0, 00:23:27.353 "crdt2": 0, 00:23:27.353 "crdt3": 0 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "nvmf_create_transport", 00:23:27.353 "params": { 00:23:27.353 "abort_timeout_sec": 1, 00:23:27.353 "ack_timeout": 0, 00:23:27.353 "buf_cache_size": 
4294967295, 00:23:27.353 "c2h_success": false, 00:23:27.353 "data_wr_pool_size": 0, 00:23:27.353 "dif_insert_or_strip": false, 00:23:27.353 "in_capsule_data_size": 4096, 00:23:27.353 "io_unit_size": 131072, 00:23:27.353 "max_aq_depth": 128, 00:23:27.353 "max_io_qpairs_per_ctrlr": 127, 00:23:27.353 "max_io_size": 131072, 00:23:27.353 "max_queue_depth": 128, 00:23:27.353 "num_shared_buffers": 511, 00:23:27.353 "sock_priority": 0, 00:23:27.353 "trtype": "TCP", 00:23:27.353 "zcopy": false 00:23:27.353 } 00:23:27.353 }, 00:23:27.353 { 00:23:27.353 "method": "nvmf_create_subsystem", 00:23:27.353 "params": { 00:23:27.353 "allow_any_host": false, 00:23:27.353 "ana_reporting": false, 00:23:27.353 "max_cntlid": 65519, 00:23:27.353 "max_namespaces": 32, 00:23:27.354 "min_cntlid": 1, 00:23:27.354 "model_number": "SPDK bdev Controller", 00:23:27.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.354 "serial_number": "00000000000000000000" 00:23:27.354 } 00:23:27.354 }, 00:23:27.354 { 00:23:27.354 "method": "nvmf_subsystem_add_host", 00:23:27.354 "params": { 00:23:27.354 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.354 "psk": "key0" 00:23:27.354 } 00:23:27.354 }, 00:23:27.354 { 00:23:27.354 "method": "nvmf_subsystem_add_ns", 00:23:27.354 "params": { 00:23:27.354 "namespace": { 00:23:27.354 "bdev_name": "malloc0", 00:23:27.354 "nguid": "9016A3B90AF649D3B305FF96143B0875", 00:23:27.354 "no_auto_visible": false, 00:23:27.354 "nsid": 1, 00:23:27.354 "uuid": "9016a3b9-0af6-49d3-b305-ff96143b0875" 00:23:27.354 }, 00:23:27.354 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:27.354 } 00:23:27.354 }, 00:23:27.354 { 00:23:27.354 "method": "nvmf_subsystem_add_listener", 00:23:27.354 "params": { 00:23:27.354 "listen_address": { 00:23:27.354 "adrfam": "IPv4", 00:23:27.354 "traddr": "10.0.0.3", 00:23:27.354 "trsvcid": "4420", 00:23:27.354 "trtype": "TCP" 00:23:27.354 }, 00:23:27.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.354 "secure_channel": false, 00:23:27.354 "sock_impl": "ssl" 00:23:27.354 } 00:23:27.354 } 00:23:27.354 ] 00:23:27.354 } 00:23:27.354 ] 00:23:27.354 }' 00:23:27.354 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:27.920 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:27.920 "subsystems": [ 00:23:27.920 { 00:23:27.920 "subsystem": "keyring", 00:23:27.920 "config": [ 00:23:27.920 { 00:23:27.920 "method": "keyring_file_add_key", 00:23:27.920 "params": { 00:23:27.920 "name": "key0", 00:23:27.920 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:27.920 } 00:23:27.920 } 00:23:27.920 ] 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "subsystem": "iobuf", 00:23:27.920 "config": [ 00:23:27.920 { 00:23:27.920 "method": "iobuf_set_options", 00:23:27.920 "params": { 00:23:27.920 "large_bufsize": 135168, 00:23:27.920 "large_pool_count": 1024, 00:23:27.920 "small_bufsize": 8192, 00:23:27.920 "small_pool_count": 8192 00:23:27.920 } 00:23:27.920 } 00:23:27.920 ] 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "subsystem": "sock", 00:23:27.920 "config": [ 00:23:27.920 { 00:23:27.920 "method": "sock_set_default_impl", 00:23:27.920 "params": { 00:23:27.920 "impl_name": "posix" 00:23:27.920 } 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "method": "sock_impl_set_options", 00:23:27.920 "params": { 00:23:27.920 "enable_ktls": false, 00:23:27.920 "enable_placement_id": 0, 00:23:27.920 "enable_quickack": false, 00:23:27.920 "enable_recv_pipe": true, 
00:23:27.920 "enable_zerocopy_send_client": false, 00:23:27.920 "enable_zerocopy_send_server": true, 00:23:27.920 "impl_name": "ssl", 00:23:27.920 "recv_buf_size": 4096, 00:23:27.920 "send_buf_size": 4096, 00:23:27.920 "tls_version": 0, 00:23:27.920 "zerocopy_threshold": 0 00:23:27.920 } 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "method": "sock_impl_set_options", 00:23:27.920 "params": { 00:23:27.920 "enable_ktls": false, 00:23:27.920 "enable_placement_id": 0, 00:23:27.920 "enable_quickack": false, 00:23:27.920 "enable_recv_pipe": true, 00:23:27.920 "enable_zerocopy_send_client": false, 00:23:27.920 "enable_zerocopy_send_server": true, 00:23:27.920 "impl_name": "posix", 00:23:27.920 "recv_buf_size": 2097152, 00:23:27.920 "send_buf_size": 2097152, 00:23:27.920 "tls_version": 0, 00:23:27.920 "zerocopy_threshold": 0 00:23:27.920 } 00:23:27.920 } 00:23:27.920 ] 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "subsystem": "vmd", 00:23:27.920 "config": [] 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "subsystem": "accel", 00:23:27.920 "config": [ 00:23:27.920 { 00:23:27.920 "method": "accel_set_options", 00:23:27.920 "params": { 00:23:27.920 "buf_count": 2048, 00:23:27.920 "large_cache_size": 16, 00:23:27.920 "sequence_count": 2048, 00:23:27.920 "small_cache_size": 128, 00:23:27.920 "task_count": 2048 00:23:27.920 } 00:23:27.920 } 00:23:27.920 ] 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "subsystem": "bdev", 00:23:27.920 "config": [ 00:23:27.920 { 00:23:27.920 "method": "bdev_set_options", 00:23:27.920 "params": { 00:23:27.920 "bdev_auto_examine": true, 00:23:27.920 "bdev_io_cache_size": 256, 00:23:27.920 "bdev_io_pool_size": 65535, 00:23:27.920 "iobuf_large_cache_size": 16, 00:23:27.920 "iobuf_small_cache_size": 128 00:23:27.920 } 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "method": "bdev_raid_set_options", 00:23:27.920 "params": { 00:23:27.920 "process_max_bandwidth_mb_sec": 0, 00:23:27.920 "process_window_size_kb": 1024 00:23:27.920 } 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "method": "bdev_iscsi_set_options", 00:23:27.920 "params": { 00:23:27.920 "timeout_sec": 30 00:23:27.920 } 00:23:27.920 }, 00:23:27.920 { 00:23:27.920 "method": "bdev_nvme_set_options", 00:23:27.920 "params": { 00:23:27.920 "action_on_timeout": "none", 00:23:27.920 "allow_accel_sequence": false, 00:23:27.920 "arbitration_burst": 0, 00:23:27.920 "bdev_retry_count": 3, 00:23:27.920 "ctrlr_loss_timeout_sec": 0, 00:23:27.920 "delay_cmd_submit": true, 00:23:27.920 "dhchap_dhgroups": [ 00:23:27.920 "null", 00:23:27.920 "ffdhe2048", 00:23:27.920 "ffdhe3072", 00:23:27.920 "ffdhe4096", 00:23:27.920 "ffdhe6144", 00:23:27.920 "ffdhe8192" 00:23:27.920 ], 00:23:27.920 "dhchap_digests": [ 00:23:27.920 "sha256", 00:23:27.920 "sha384", 00:23:27.920 "sha512" 00:23:27.920 ], 00:23:27.920 "disable_auto_failback": false, 00:23:27.920 "fast_io_fail_timeout_sec": 0, 00:23:27.920 "generate_uuids": false, 00:23:27.920 "high_priority_weight": 0, 00:23:27.920 "io_path_stat": false, 00:23:27.920 "io_queue_requests": 512, 00:23:27.920 "keep_alive_timeout_ms": 10000, 00:23:27.920 "low_priority_weight": 0, 00:23:27.920 "medium_priority_weight": 0, 00:23:27.920 "nvme_adminq_poll_period_us": 10000, 00:23:27.920 "nvme_error_stat": false, 00:23:27.920 "nvme_ioq_poll_period_us": 0, 00:23:27.920 "rdma_cm_event_timeout_ms": 0, 00:23:27.920 "rdma_max_cq_size": 0, 00:23:27.920 "rdma_srq_size": 0, 00:23:27.920 "reconnect_delay_sec": 0, 00:23:27.920 "timeout_admin_us": 0, 00:23:27.921 "timeout_us": 0, 00:23:27.921 "transport_ack_timeout": 0, 00:23:27.921 
"transport_retry_count": 4, 00:23:27.921 "transport_tos": 0 00:23:27.921 } 00:23:27.921 }, 00:23:27.921 { 00:23:27.921 "method": "bdev_nvme_attach_controller", 00:23:27.921 "params": { 00:23:27.921 "adrfam": "IPv4", 00:23:27.921 "ctrlr_loss_timeout_sec": 0, 00:23:27.921 "ddgst": false, 00:23:27.921 "fast_io_fail_timeout_sec": 0, 00:23:27.921 "hdgst": false, 00:23:27.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.921 "name": "nvme0", 00:23:27.921 "prchk_guard": false, 00:23:27.921 "prchk_reftag": false, 00:23:27.921 "psk": "key0", 00:23:27.921 "reconnect_delay_sec": 0, 00:23:27.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.921 "traddr": "10.0.0.3", 00:23:27.921 "trsvcid": "4420", 00:23:27.921 "trtype": "TCP" 00:23:27.921 } 00:23:27.921 }, 00:23:27.921 { 00:23:27.921 "method": "bdev_nvme_set_hotplug", 00:23:27.921 "params": { 00:23:27.921 "enable": false, 00:23:27.921 "period_us": 100000 00:23:27.921 } 00:23:27.921 }, 00:23:27.921 { 00:23:27.921 "method": "bdev_enable_histogram", 00:23:27.921 "params": { 00:23:27.921 "enable": true, 00:23:27.921 "name": "nvme0n1" 00:23:27.921 } 00:23:27.921 }, 00:23:27.921 { 00:23:27.921 "method": "bdev_wait_for_examine" 00:23:27.921 } 00:23:27.921 ] 00:23:27.921 }, 00:23:27.921 { 00:23:27.921 "subsystem": "nbd", 00:23:27.921 "config": [] 00:23:27.921 } 00:23:27.921 ] 00:23:27.921 }' 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 102207 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102207 ']' 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102207 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102207 00:23:27.921 killing process with pid 102207 00:23:27.921 Received shutdown signal, test time was about 1.000000 seconds 00:23:27.921 00:23:27.921 Latency(us) 00:23:27.921 [2024-11-20T17:44:09.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.921 [2024-11-20T17:44:09.709Z] =================================================================================================================== 00:23:27.921 [2024-11-20T17:44:09.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102207' 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102207 00:23:27.921 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102207 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 102157 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102157 ']' 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102157 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # 
uname 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102157 00:23:28.180 killing process with pid 102157 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102157' 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102157 00:23:28.180 17:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102157 00:23:28.438 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:28.438 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:28.438 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.438 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:28.438 "subsystems": [ 00:23:28.438 { 00:23:28.438 "subsystem": "keyring", 00:23:28.438 "config": [ 00:23:28.438 { 00:23:28.438 "method": "keyring_file_add_key", 00:23:28.438 "params": { 00:23:28.438 "name": "key0", 00:23:28.438 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:28.438 } 00:23:28.438 } 00:23:28.438 ] 00:23:28.438 }, 00:23:28.438 { 00:23:28.438 "subsystem": "iobuf", 00:23:28.438 "config": [ 00:23:28.438 { 00:23:28.438 "method": "iobuf_set_options", 00:23:28.438 "params": { 00:23:28.438 "large_bufsize": 135168, 00:23:28.438 "large_pool_count": 1024, 00:23:28.438 "small_bufsize": 8192, 00:23:28.438 "small_pool_count": 8192 00:23:28.438 } 00:23:28.438 } 00:23:28.438 ] 00:23:28.438 }, 00:23:28.438 { 00:23:28.438 "subsystem": "sock", 00:23:28.438 "config": [ 00:23:28.438 { 00:23:28.438 "method": "sock_set_default_impl", 00:23:28.438 "params": { 00:23:28.438 "impl_name": "posix" 00:23:28.438 } 00:23:28.438 }, 00:23:28.438 { 00:23:28.438 "method": "sock_impl_set_options", 00:23:28.438 "params": { 00:23:28.438 "enable_ktls": false, 00:23:28.438 "enable_placement_id": 0, 00:23:28.438 "enable_quickack": false, 00:23:28.438 "enable_recv_pipe": true, 00:23:28.438 "enable_zerocopy_send_client": false, 00:23:28.438 "enable_zerocopy_send_server": true, 00:23:28.438 "impl_name": "ssl", 00:23:28.438 "recv_buf_size": 4096, 00:23:28.438 "send_buf_size": 4096, 00:23:28.438 "tls_version": 0, 00:23:28.438 "zerocopy_threshold": 0 00:23:28.438 } 00:23:28.438 }, 00:23:28.438 { 00:23:28.438 "method": "sock_impl_set_options", 00:23:28.438 "params": { 00:23:28.439 "enable_ktls": false, 00:23:28.439 "enable_placement_id": 0, 00:23:28.439 "enable_quickack": false, 00:23:28.439 "enable_recv_pipe": true, 00:23:28.439 "enable_zerocopy_send_client": false, 00:23:28.439 "enable_zerocopy_send_server": true, 00:23:28.439 "impl_name": "posix", 00:23:28.439 "recv_buf_size": 2097152, 00:23:28.439 "send_buf_size": 2097152, 00:23:28.439 "tls_version": 0, 00:23:28.439 "zerocopy_threshold": 0 00:23:28.439 } 00:23:28.439 } 00:23:28.439 ] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "vmd", 00:23:28.439 "config": [] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "accel", 00:23:28.439 "config": [ 00:23:28.439 { 
00:23:28.439 "method": "accel_set_options", 00:23:28.439 "params": { 00:23:28.439 "buf_count": 2048, 00:23:28.439 "large_cache_size": 16, 00:23:28.439 "sequence_count": 2048, 00:23:28.439 "small_cache_size": 128, 00:23:28.439 "task_count": 2048 00:23:28.439 } 00:23:28.439 } 00:23:28.439 ] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "bdev", 00:23:28.439 "config": [ 00:23:28.439 { 00:23:28.439 "method": "bdev_set_options", 00:23:28.439 "params": { 00:23:28.439 "bdev_auto_examine": true, 00:23:28.439 "bdev_io_cache_size": 256, 00:23:28.439 "bdev_io_pool_size": 65535, 00:23:28.439 "iobuf_large_cache_size": 16, 00:23:28.439 "iobuf_small_cache_size": 128 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_raid_set_options", 00:23:28.439 "params": { 00:23:28.439 "process_max_bandwidth_mb_sec": 0, 00:23:28.439 "process_window_size_kb": 1024 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_iscsi_set_options", 00:23:28.439 "params": { 00:23:28.439 "timeout_sec": 30 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_nvme_set_options", 00:23:28.439 "params": { 00:23:28.439 "action_on_timeout": "none", 00:23:28.439 "allow_accel_sequence": false, 00:23:28.439 "arbitration_burst": 0, 00:23:28.439 "bdev_retry_count": 3, 00:23:28.439 "ctrlr_loss_timeout_sec": 0, 00:23:28.439 "delay_cmd_submit": true, 00:23:28.439 "dhchap_dhgroups": [ 00:23:28.439 "null", 00:23:28.439 "ffdhe2048", 00:23:28.439 "ffdhe3072", 00:23:28.439 "ffdhe4096", 00:23:28.439 "ffdhe6144", 00:23:28.439 "ffdhe8192" 00:23:28.439 ], 00:23:28.439 "dhchap_digests": [ 00:23:28.439 "sha256", 00:23:28.439 "sha384", 00:23:28.439 "sha512" 00:23:28.439 ], 00:23:28.439 "disable_auto_failback": false, 00:23:28.439 "fast_io_fail_timeout_sec": 0, 00:23:28.439 "generate_uuids": false, 00:23:28.439 "high_priority_weight": 0, 00:23:28.439 "io_path_stat": false, 00:23:28.439 "io_queue_requests": 0, 00:23:28.439 "keep_alive_timeout_ms": 10000, 00:23:28.439 "low_priority_weight": 0, 00:23:28.439 "medium_priority_weight": 0, 00:23:28.439 "nvme_adminq_poll_period_us": 10000, 00:23:28.439 "nvme_error_stat": false, 00:23:28.439 "nvme_ioq_poll_period_us": 0, 00:23:28.439 "rdma_cm_event_timeout_ms": 0, 00:23:28.439 "rdma_max_cq_size": 0, 00:23:28.439 "rdma_srq_size": 0, 00:23:28.439 "reconnect_delay_sec": 0, 00:23:28.439 "timeout_admin_us": 0, 00:23:28.439 "timeout_us": 0, 00:23:28.439 "transport_ack_timeout": 0, 00:23:28.439 "transport_retry_count": 4, 00:23:28.439 "transport_tos": 0 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_nvme_set_hotplug", 00:23:28.439 "params": { 00:23:28.439 "enable": false, 00:23:28.439 "period_us": 100000 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_malloc_create", 00:23:28.439 "params": { 00:23:28.439 "block_size": 4096, 00:23:28.439 "dif_is_head_of_md": false, 00:23:28.439 "dif_pi_format": 0, 00:23:28.439 "dif_type": 0, 00:23:28.439 "md_size": 0, 00:23:28.439 "name": "malloc0", 00:23:28.439 "num_blocks": 8192, 00:23:28.439 "optimal_io_boundary": 0, 00:23:28.439 "physical_block_size": 4096, 00:23:28.439 "uuid": "9016a3b9-0af6-49d3-b305-ff96143b0875" 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "bdev_wait_for_examine" 00:23:28.439 } 00:23:28.439 ] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "nbd", 00:23:28.439 "config": [] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "scheduler", 00:23:28.439 "config": [ 00:23:28.439 { 00:23:28.439 "method": 
"framework_set_scheduler", 00:23:28.439 "params": { 00:23:28.439 "name": "static" 00:23:28.439 } 00:23:28.439 } 00:23:28.439 ] 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "subsystem": "nvmf", 00:23:28.439 "config": [ 00:23:28.439 { 00:23:28.439 "method": "nvmf_set_config", 00:23:28.439 "params": { 00:23:28.439 "admin_cmd_passthru": { 00:23:28.439 "identify_ctrlr": false 00:23:28.439 }, 00:23:28.439 "dhchap_dhgroups": [ 00:23:28.439 "null", 00:23:28.439 "ffdhe2048", 00:23:28.439 "ffdhe3072", 00:23:28.439 "ffdhe4096", 00:23:28.439 "ffdhe6144", 00:23:28.439 "ffdhe8192" 00:23:28.439 ], 00:23:28.439 "dhchap_digests": [ 00:23:28.439 "sha256", 00:23:28.439 "sha384", 00:23:28.439 "sha512" 00:23:28.439 ], 00:23:28.439 "discovery_filter": "match_any" 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.439 "method": "nvmf_set_max_subsystems", 00:23:28.439 "params": { 00:23:28.439 "max_subsystems": 1024 00:23:28.439 } 00:23:28.439 }, 00:23:28.439 { 00:23:28.440 "method": "nvmf_set_crdt", 00:23:28.440 "params": { 00:23:28.440 "crdt1": 0, 00:23:28.440 "crdt2": 0, 00:23:28.440 "crdt3": 0 00:23:28.440 } 00:23:28.440 }, 00:23:28.440 { 00:23:28.440 "method": "nvmf_create_transport", 00:23:28.440 "params": { 00:23:28.440 "abort_timeout_sec": 1, 00:23:28.440 "ack_timeout": 0, 00:23:28.440 "buf_cache_size": 4294967295, 00:23:28.440 "c2h_success": false, 00:23:28.440 "data_wr_pool_size": 0, 00:23:28.440 "dif_insert_or_strip": false, 00:23:28.440 "in_capsule_data_size": 4096, 00:23:28.440 "io_unit_size": 131072, 00:23:28.440 "max_aq_depth": 128, 00:23:28.440 "max_io_qpairs_per_ctrlr": 127, 00:23:28.440 "max_io_size": 131072, 00:23:28.440 "max_queue_depth": 128, 00:23:28.440 "num_shared_buffers": 511, 00:23:28.440 "sock_priority": 0, 00:23:28.440 "trtype": "TCP", 00:23:28.440 "zcopy": false 00:23:28.440 } 00:23:28.440 }, 00:23:28.440 { 00:23:28.440 "method": "nvmf_create_subsystem", 00:23:28.440 "params": { 00:23:28.440 "allow_any_host": false, 00:23:28.440 "ana_reporting": false, 00:23:28.440 "max_cntlid": 65519, 00:23:28.440 "max_namespaces": 32, 00:23:28.440 "min_cntlid": 1, 00:23:28.440 "model_number": "SPDK bdev Controller", 00:23:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.440 "serial_number": "00000000000000000000" 00:23:28.440 } 00:23:28.440 }, 00:23:28.440 { 00:23:28.440 "method": "nvmf_subsystem_add_host", 00:23:28.440 "params": { 00:23:28.440 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.440 "psk": "key0" 00:23:28.440 } 00:23:28.440 }, 00:23:28.440 { 00:23:28.440 "method": "nvmf_subsystem_add_ns", 00:23:28.440 "params": { 00:23:28.440 "namespace": { 00:23:28.440 "bdev_name": "malloc0", 00:23:28.440 "nguid": "9016A3B90AF649D3B305FF96143B0875", 00:23:28.440 "no_auto_visible": false, 00:23:28.440 "nsid": 1, 00:23:28.440 "uuid": "9016a3b9-0af6-49d3-b305-ff96143b0875" 00:23:28.440 }, 00:23:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:28.440 } 00:23:28.440 }, 00:23:28.440 { 00:23:28.440 "method": "nvmf_subsystem_add_listener", 00:23:28.440 "params": { 00:23:28.440 "listen_address": { 00:23:28.440 "adrfam": "IPv4", 00:23:28.440 "traddr": "10.0.0.3", 00:23:28.440 "trsvcid": "4420", 00:23:28.440 "trtype": "TCP" 00:23:28.440 }, 00:23:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.440 "secure_channel": false, 00:23:28.440 "sock_impl": "ssl" 00:23:28.440 } 00:23:28.440 } 00:23:28.440 ] 00:23:28.440 } 00:23:28.440 ] 00:23:28.440 }' 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.440 
17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=102298 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 102298 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102298 ']' 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.440 17:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.755 [2024-11-20 17:44:10.236107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:28.755 [2024-11-20 17:44:10.236245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.755 [2024-11-20 17:44:10.381246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.755 [2024-11-20 17:44:10.511146] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.755 [2024-11-20 17:44:10.511213] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.755 [2024-11-20 17:44:10.511225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.755 [2024-11-20 17:44:10.511234] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.755 [2024-11-20 17:44:10.511242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
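The app_setup_trace notices just above describe two ways to read the trace ring enabled by -e 0xFFFF: attach live with spdk_trace, or copy the shm file out for later decoding. A minimal offline pass might look like this — the -f file flag is assumed from spdk_trace's usual CLI and is not exercised anywhere in this log:

    # live snapshot while the target is still running
    spdk_trace -s nvmf -i 0
    # or preserve the ring and decode it after the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    spdk_trace -f /tmp/nvmf_trace.0 | less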
00:23:28.755 [2024-11-20 17:44:10.511345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.013 [2024-11-20 17:44:10.798287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.270 [2024-11-20 17:44:10.836078] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.270 [2024-11-20 17:44:10.836467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:29.529 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.529 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:29.529 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:29.529 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.529 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=102342 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 102342 /var/tmp/bdevperf.sock 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 102342 ']' 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
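The bdevperf started below receives its whole configuration in one shot through -c /dev/fd/63. The step-by-step RPC form of the same setup, as used for the earlier bdevperf instance at 17:44:05-17:44:07, condenses to the commands already traced in this log:

    # park bdevperf idle (-z) and configure it over its own RPC socket
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uIBRaQWr5B
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # then drive the verify workload
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests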
00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:29.787 17:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:29.787 "subsystems": [ 00:23:29.787 { 00:23:29.787 "subsystem": "keyring", 00:23:29.787 "config": [ 00:23:29.787 { 00:23:29.787 "method": "keyring_file_add_key", 00:23:29.787 "params": { 00:23:29.787 "name": "key0", 00:23:29.787 "path": "/tmp/tmp.uIBRaQWr5B" 00:23:29.787 } 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "iobuf", 00:23:29.787 "config": [ 00:23:29.787 { 00:23:29.787 "method": "iobuf_set_options", 00:23:29.787 "params": { 00:23:29.787 "large_bufsize": 135168, 00:23:29.787 "large_pool_count": 1024, 00:23:29.787 "small_bufsize": 8192, 00:23:29.787 "small_pool_count": 8192 00:23:29.787 } 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "sock", 00:23:29.787 "config": [ 00:23:29.787 { 00:23:29.787 "method": "sock_set_default_impl", 00:23:29.787 "params": { 00:23:29.787 "impl_name": "posix" 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "sock_impl_set_options", 00:23:29.787 "params": { 00:23:29.787 "enable_ktls": false, 00:23:29.787 "enable_placement_id": 0, 00:23:29.787 "enable_quickack": false, 00:23:29.787 "enable_recv_pipe": true, 00:23:29.787 "enable_zerocopy_send_client": false, 00:23:29.787 "enable_zerocopy_send_server": true, 00:23:29.787 "impl_name": "ssl", 00:23:29.787 "recv_buf_size": 4096, 00:23:29.787 "send_buf_size": 4096, 00:23:29.787 "tls_version": 0, 00:23:29.787 "zerocopy_threshold": 0 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "sock_impl_set_options", 00:23:29.787 "params": { 00:23:29.787 "enable_ktls": false, 00:23:29.787 "enable_placement_id": 0, 00:23:29.787 "enable_quickack": false, 00:23:29.787 "enable_recv_pipe": true, 00:23:29.787 "enable_zerocopy_send_client": false, 00:23:29.787 "enable_zerocopy_send_server": true, 00:23:29.787 "impl_name": "posix", 00:23:29.787 "recv_buf_size": 2097152, 00:23:29.787 "send_buf_size": 2097152, 00:23:29.787 "tls_version": 0, 00:23:29.787 "zerocopy_threshold": 0 00:23:29.787 } 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "vmd", 00:23:29.787 "config": [] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "accel", 00:23:29.787 "config": [ 00:23:29.787 { 00:23:29.787 "method": "accel_set_options", 00:23:29.787 "params": { 00:23:29.787 "buf_count": 2048, 00:23:29.787 "large_cache_size": 16, 00:23:29.787 "sequence_count": 2048, 00:23:29.787 "small_cache_size": 128, 00:23:29.787 "task_count": 2048 00:23:29.787 } 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "bdev", 00:23:29.787 "config": [ 00:23:29.787 { 00:23:29.787 "method": "bdev_set_options", 00:23:29.787 "params": { 00:23:29.787 "bdev_auto_examine": true, 00:23:29.787 "bdev_io_cache_size": 256, 00:23:29.787 "bdev_io_pool_size": 65535, 00:23:29.787 "iobuf_large_cache_size": 16, 00:23:29.787 "iobuf_small_cache_size": 128 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_raid_set_options", 00:23:29.787 "params": { 00:23:29.787 
"process_max_bandwidth_mb_sec": 0, 00:23:29.787 "process_window_size_kb": 1024 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_iscsi_set_options", 00:23:29.787 "params": { 00:23:29.787 "timeout_sec": 30 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_nvme_set_options", 00:23:29.787 "params": { 00:23:29.787 "action_on_timeout": "none", 00:23:29.787 "allow_accel_sequence": false, 00:23:29.787 "arbitration_burst": 0, 00:23:29.787 "bdev_retry_count": 3, 00:23:29.787 "ctrlr_loss_timeout_sec": 0, 00:23:29.787 "delay_cmd_submit": true, 00:23:29.787 "dhchap_dhgroups": [ 00:23:29.787 "null", 00:23:29.787 "ffdhe2048", 00:23:29.787 "ffdhe3072", 00:23:29.787 "ffdhe4096", 00:23:29.787 "ffdhe6144", 00:23:29.787 "ffdhe8192" 00:23:29.787 ], 00:23:29.787 "dhchap_digests": [ 00:23:29.787 "sha256", 00:23:29.787 "sha384", 00:23:29.787 "sha512" 00:23:29.787 ], 00:23:29.787 "disable_auto_failback": false, 00:23:29.787 "fast_io_fail_timeout_sec": 0, 00:23:29.787 "generate_uuids": false, 00:23:29.787 "high_priority_weight": 0, 00:23:29.787 "io_path_stat": false, 00:23:29.787 "io_queue_requests": 512, 00:23:29.787 "keep_alive_timeout_ms": 10000, 00:23:29.787 "low_priority_weight": 0, 00:23:29.787 "medium_priority_weight": 0, 00:23:29.787 "nvme_adminq_poll_period_us": 10000, 00:23:29.787 "nvme_error_stat": false, 00:23:29.787 "nvme_ioq_poll_period_us": 0, 00:23:29.787 "rdma_cm_event_timeout_ms": 0, 00:23:29.787 "rdma_max_cq_size": 0, 00:23:29.787 "rdma_srq_size": 0, 00:23:29.787 "reconnect_delay_sec": 0, 00:23:29.787 "timeout_admin_us": 0, 00:23:29.787 "timeout_us": 0, 00:23:29.787 "transport_ack_timeout": 0, 00:23:29.787 "transport_retry_count": 4, 00:23:29.787 "transport_tos": 0 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_nvme_attach_controller", 00:23:29.787 "params": { 00:23:29.787 "adrfam": "IPv4", 00:23:29.787 "ctrlr_loss_timeout_sec": 0, 00:23:29.787 "ddgst": false, 00:23:29.787 "fast_io_fail_timeout_sec": 0, 00:23:29.787 "hdgst": false, 00:23:29.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.787 "name": "nvme0", 00:23:29.787 "prchk_guard": false, 00:23:29.787 "prchk_reftag": false, 00:23:29.787 "psk": "key0", 00:23:29.787 "reconnect_delay_sec": 0, 00:23:29.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.787 "traddr": "10.0.0.3", 00:23:29.787 "trsvcid": "4420", 00:23:29.787 "trtype": "TCP" 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_nvme_set_hotplug", 00:23:29.787 "params": { 00:23:29.787 "enable": false, 00:23:29.787 "period_us": 100000 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_enable_histogram", 00:23:29.787 "params": { 00:23:29.787 "enable": true, 00:23:29.787 "name": "nvme0n1" 00:23:29.787 } 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "method": "bdev_wait_for_examine" 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }, 00:23:29.787 { 00:23:29.787 "subsystem": "nbd", 00:23:29.787 "config": [] 00:23:29.787 } 00:23:29.787 ] 00:23:29.787 }' 00:23:29.787 [2024-11-20 17:44:11.412597] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:29.787 [2024-11-20 17:44:11.412735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102342 ] 00:23:29.787 [2024-11-20 17:44:11.557073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.045 [2024-11-20 17:44:11.686987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.304 [2024-11-20 17:44:11.897972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.871 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.872 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:30.872 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:30.872 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:31.130 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.130 17:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.130 Running I/O for 1 seconds... 00:23:32.504 3845.00 IOPS, 15.02 MiB/s 00:23:32.504 Latency(us) 00:23:32.504 [2024-11-20T17:44:14.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.504 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.504 Verification LBA range: start 0x0 length 0x2000 00:23:32.504 nvme0n1 : 1.02 3907.40 15.26 0.00 0.00 32403.18 580.89 19899.11 00:23:32.504 [2024-11-20T17:44:14.292Z] =================================================================================================================== 00:23:32.504 [2024-11-20T17:44:14.292Z] Total : 3907.40 15.26 0.00 0.00 32403.18 580.89 19899.11 00:23:32.504 { 00:23:32.504 "results": [ 00:23:32.504 { 00:23:32.504 "job": "nvme0n1", 00:23:32.504 "core_mask": "0x2", 00:23:32.504 "workload": "verify", 00:23:32.504 "status": "finished", 00:23:32.504 "verify_range": { 00:23:32.504 "start": 0, 00:23:32.504 "length": 8192 00:23:32.504 }, 00:23:32.504 "queue_depth": 128, 00:23:32.504 "io_size": 4096, 00:23:32.504 "runtime": 1.016788, 00:23:32.504 "iops": 3907.402526387015, 00:23:32.504 "mibps": 15.263291118699277, 00:23:32.504 "io_failed": 0, 00:23:32.504 "io_timeout": 0, 00:23:32.504 "avg_latency_us": 32403.177828524353, 00:23:32.504 "min_latency_us": 580.8872727272727, 00:23:32.504 "max_latency_us": 19899.112727272728 00:23:32.504 } 00:23:32.504 ], 00:23:32.504 "core_count": 1 00:23:32.504 } 00:23:32.504 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:32.504 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:32.504 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:32.504 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:32.504 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:32.505 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:32.505 
17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:32.505 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:32.505 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:32.505 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:32.505 17:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:32.505 nvmf_trace.0 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 102342 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102342 ']' 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102342 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102342 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:32.505 killing process with pid 102342 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102342' 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102342 00:23:32.505 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.505 00:23:32.505 Latency(us) 00:23:32.505 [2024-11-20T17:44:14.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.505 [2024-11-20T17:44:14.293Z] =================================================================================================================== 00:23:32.505 [2024-11-20T17:44:14.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.505 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102342 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.763 rmmod nvme_tcp 00:23:32.763 rmmod nvme_fabrics 00:23:32.763 rmmod nvme_keyring 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 
-- # set -e 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 102298 ']' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 102298 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 102298 ']' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 102298 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102298 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.763 killing process with pid 102298 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102298' 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 102298 00:23:32.763 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 102298 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:33.330 17:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:33.330 17:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Al6YzwUzCj /tmp/tmp.aaCGg2vmU0 /tmp/tmp.uIBRaQWr5B 00:23:33.330 00:23:33.330 real 1m39.132s 00:23:33.330 user 2m44.691s 00:23:33.330 sys 0m30.397s 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.330 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.330 ************************************ 00:23:33.330 END TEST nvmf_tls 00:23:33.330 ************************************ 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.589 ************************************ 00:23:33.589 START TEST nvmf_fips 00:23:33.589 ************************************ 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:33.589 * Looking for test storage... 
00:23:33.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.589 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:33.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.590 --rc genhtml_branch_coverage=1 00:23:33.590 --rc genhtml_function_coverage=1 00:23:33.590 --rc genhtml_legend=1 00:23:33.590 --rc geninfo_all_blocks=1 00:23:33.590 --rc geninfo_unexecuted_blocks=1 00:23:33.590 00:23:33.590 ' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:33.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.590 --rc genhtml_branch_coverage=1 00:23:33.590 --rc genhtml_function_coverage=1 00:23:33.590 --rc genhtml_legend=1 00:23:33.590 --rc geninfo_all_blocks=1 00:23:33.590 --rc geninfo_unexecuted_blocks=1 00:23:33.590 00:23:33.590 ' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:33.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.590 --rc genhtml_branch_coverage=1 00:23:33.590 --rc genhtml_function_coverage=1 00:23:33.590 --rc genhtml_legend=1 00:23:33.590 --rc geninfo_all_blocks=1 00:23:33.590 --rc geninfo_unexecuted_blocks=1 00:23:33.590 00:23:33.590 ' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:33.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.590 --rc genhtml_branch_coverage=1 00:23:33.590 --rc genhtml_function_coverage=1 00:23:33.590 --rc genhtml_legend=1 00:23:33.590 --rc geninfo_all_blocks=1 00:23:33.590 --rc geninfo_unexecuted_blocks=1 00:23:33.590 00:23:33.590 ' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
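The version gate traced above (lt 1.15 2 for lcov, and later ge 3.1.1 3.0.0 for OpenSSL) is the same cmp_versions helper from scripts/common.sh both times: split each version string on ".-:" into an array and compare the fields numerically, left to right. A minimal standalone sketch of that comparison, assuming purely numeric fields; ver_lt is a hypothetical name for illustration, not the SPDK helper itself:

    # Split dotted versions on ".-:" and compare numerically, field by field.
    # Missing fields default to 0, so "2" behaves like "2.0.0".
    ver_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly lower
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly higher
        done
        return 1   # all fields equal: not "less than"
    }
    ver_lt 1.15 2 && echo "lcov is older than 2.x"   # matches the check above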
00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.590 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:33.850 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:33.851 Error setting digest 00:23:33.851 4092E3F8A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:33.851 4092E3F8A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:33.851 
17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:33.851 Cannot find device "nvmf_init_br" 00:23:33.851 17:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:33.851 Cannot find device "nvmf_init_br2" 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:33.851 Cannot find device "nvmf_tgt_br" 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.851 Cannot find device "nvmf_tgt_br2" 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:23:33.851 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:34.110 Cannot find device "nvmf_init_br" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:34.110 Cannot find device "nvmf_init_br2" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:34.110 Cannot find device "nvmf_tgt_br" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:34.110 Cannot find device "nvmf_tgt_br2" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:34.110 Cannot find device "nvmf_br" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:34.110 Cannot find device "nvmf_init_if" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:34.110 Cannot find device "nvmf_init_if2" 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.110 17:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.110 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:34.368 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:34.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:23:34.369 00:23:34.369 --- 10.0.0.3 ping statistics --- 00:23:34.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.369 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:34.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:34.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:23:34.369 00:23:34.369 --- 10.0.0.4 ping statistics --- 00:23:34.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.369 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:23:34.369 00:23:34.369 --- 10.0.0.1 ping statistics --- 00:23:34.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.369 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:34.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:34.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:23:34.369 00:23:34.369 --- 10.0.0.2 ping statistics --- 00:23:34.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.369 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=102697 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 102697 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 102697 ']' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.369 17:44:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:34.369 [2024-11-20 17:44:16.090188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
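Before the target came up, nvmf_veth_init stitched together the virtual network that those four pings just verified: veth pairs for the initiator interfaces and the target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and every host-side peer enslaved to one bridge. A condensed sketch of that topology with a single interface per side, reusing the names and 10.0.0.0/24 addressing from the log (the real helper also creates the second pair on each side and installs the iptables ACCEPT rules for port 4420 seen above):

    # One initiator veth pair stays on the host; the target pair's far end
    # lives inside the namespace. The bridge forwards between the peers.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3   # host initiator -> namespaced target, as logged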
00:23:34.369 [2024-11-20 17:44:16.090291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.627 [2024-11-20 17:44:16.235595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.627 [2024-11-20 17:44:16.371029] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.627 [2024-11-20 17:44:16.371115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.627 [2024-11-20 17:44:16.371130] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.627 [2024-11-20 17:44:16.371141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.627 [2024-11-20 17:44:16.371151] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.627 [2024-11-20 17:44:16.371188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ALY 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ALY 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ALY 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ALY 00:23:35.560 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.819 [2024-11-20 17:44:17.534385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.819 [2024-11-20 17:44:17.550308] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.819 [2024-11-20 17:44:17.550578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:36.077 malloc0 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.077 17:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=102757 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 102757 /var/tmp/bdevperf.sock 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 102757 ']' 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.077 17:44:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:36.077 [2024-11-20 17:44:17.730332] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:36.077 [2024-11-20 17:44:17.730454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102757 ] 00:23:36.341 [2024-11-20 17:44:17.874895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.341 [2024-11-20 17:44:18.007914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.306 17:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.306 17:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:37.306 17:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ALY 00:23:37.567 17:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.824 [2024-11-20 17:44:19.443512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.824 TLSTESTn1 00:23:37.824 17:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.082 Running I/O for 10 seconds... 
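The TLS plumbing that bdevperf is now exercising boils down to three initiator-side steps, all visible in the trace above: write the interchange-format PSK to a file, tighten its mode to 0600, register the file as key0 in the bdevperf keyring, and attach with --psk so the queue pair comes up over a TLS-secured connection. A condensed replay of those commands under the log's paths; the PSK is the test's published sample key from the log, not a secret, and the target side was configured separately by setup_nvmf_tgt_conf:

    # Interchange-format PSK (NVMeTLSkey-1:01:...) as used by the test; the
    # harness chmods it to 0600 before handing it to the keyring module.
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register the key with the bdevperf instance, then attach with --psk.
    "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0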
00:23:40.003 3626.00 IOPS, 14.16 MiB/s [2024-11-20T17:44:22.743Z] 3711.00 IOPS, 14.50 MiB/s [2024-11-20T17:44:24.118Z] 3774.00 IOPS, 14.74 MiB/s [2024-11-20T17:44:25.052Z] 3822.75 IOPS, 14.93 MiB/s [2024-11-20T17:44:26.025Z] 3853.80 IOPS, 15.05 MiB/s [2024-11-20T17:44:26.960Z] 3879.17 IOPS, 15.15 MiB/s [2024-11-20T17:44:27.895Z] 3860.43 IOPS, 15.08 MiB/s [2024-11-20T17:44:28.828Z] 3880.62 IOPS, 15.16 MiB/s [2024-11-20T17:44:29.827Z] 3904.33 IOPS, 15.25 MiB/s [2024-11-20T17:44:29.827Z] 3917.60 IOPS, 15.30 MiB/s 00:23:48.039 Latency(us) 00:23:48.039 [2024-11-20T17:44:29.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.039 Verification LBA range: start 0x0 length 0x2000 00:23:48.039 TLSTESTn1 : 10.02 3923.66 15.33 0.00 0.00 32563.45 5928.03 36461.85 00:23:48.039 [2024-11-20T17:44:29.827Z] =================================================================================================================== 00:23:48.039 [2024-11-20T17:44:29.827Z] Total : 3923.66 15.33 0.00 0.00 32563.45 5928.03 36461.85 00:23:48.039 { 00:23:48.039 "results": [ 00:23:48.039 { 00:23:48.039 "job": "TLSTESTn1", 00:23:48.039 "core_mask": "0x4", 00:23:48.039 "workload": "verify", 00:23:48.039 "status": "finished", 00:23:48.039 "verify_range": { 00:23:48.039 "start": 0, 00:23:48.039 "length": 8192 00:23:48.039 }, 00:23:48.039 "queue_depth": 128, 00:23:48.039 "io_size": 4096, 00:23:48.039 "runtime": 10.016914, 00:23:48.039 "iops": 3923.6635155298327, 00:23:48.039 "mibps": 15.326810607538409, 00:23:48.039 "io_failed": 0, 00:23:48.039 "io_timeout": 0, 00:23:48.039 "avg_latency_us": 32563.447560283395, 00:23:48.039 "min_latency_us": 5928.029090909091, 00:23:48.039 "max_latency_us": 36461.847272727275 00:23:48.039 } 00:23:48.039 ], 00:23:48.039 "core_count": 1 00:23:48.039 } 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:48.039 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:48.039 nvmf_trace.0 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 102757 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 102757 ']' 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill 
-0 102757 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102757 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:48.298 killing process with pid 102757 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102757' 00:23:48.298 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.298 00:23:48.298 Latency(us) 00:23:48.298 [2024-11-20T17:44:30.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.298 [2024-11-20T17:44:30.086Z] =================================================================================================================== 00:23:48.298 [2024-11-20T17:44:30.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 102757 00:23:48.298 17:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 102757 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.556 rmmod nvme_tcp 00:23:48.556 rmmod nvme_fabrics 00:23:48.556 rmmod nvme_keyring 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 102697 ']' 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 102697 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 102697 ']' 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 102697 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102697 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.556 killing process with pid 102697 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102697' 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 102697 00:23:48.556 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 102697 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.123 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.381 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # 
return 0 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ALY 00:23:49.382 00:23:49.382 real 0m15.777s 00:23:49.382 user 0m22.507s 00:23:49.382 sys 0m5.893s 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.382 ************************************ 00:23:49.382 END TEST nvmf_fips 00:23:49.382 ************************************ 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:49.382 ************************************ 00:23:49.382 START TEST nvmf_control_msg_list 00:23:49.382 ************************************ 00:23:49.382 17:44:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:49.382 * Looking for test storage... 00:23:49.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:49.382 17:44:31 
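
The nvmftestfini run traced above always unwinds in the same order: unload the initiator-side NVMe-oF modules, strip only the SPDK-tagged firewall rules, dismantle the veth/bridge topology, and finally drop the target's network namespace. Condensed into a sketch; every command is taken from the trace except the last line, which is assumed to be what _remove_spdk_ns does:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only non-SPDK rules
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # assumed: done by _remove_spdk_ns
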
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.382 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.641 --rc genhtml_branch_coverage=1 00:23:49.641 --rc genhtml_function_coverage=1 00:23:49.641 --rc genhtml_legend=1 00:23:49.641 --rc geninfo_all_blocks=1 00:23:49.641 --rc geninfo_unexecuted_blocks=1 00:23:49.641 00:23:49.641 ' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.641 --rc genhtml_branch_coverage=1 00:23:49.641 --rc genhtml_function_coverage=1 00:23:49.641 --rc genhtml_legend=1 00:23:49.641 --rc geninfo_all_blocks=1 00:23:49.641 --rc geninfo_unexecuted_blocks=1 00:23:49.641 00:23:49.641 ' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.641 --rc genhtml_branch_coverage=1 00:23:49.641 --rc genhtml_function_coverage=1 00:23:49.641 --rc genhtml_legend=1 00:23:49.641 --rc geninfo_all_blocks=1 00:23:49.641 --rc geninfo_unexecuted_blocks=1 00:23:49.641 00:23:49.641 ' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:49.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.641 --rc genhtml_branch_coverage=1 00:23:49.641 --rc genhtml_function_coverage=1 00:23:49.641 --rc genhtml_legend=1 00:23:49.641 --rc 
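
The long scripts/common.sh trace above is nothing more than a semantic version comparison: lt 1.15 2 splits both versions on dots, dashes, and colons and compares field by field, and because lcov reports a 1.x version here, the old-style --rc lcov_branch_coverage=1 option names are selected. The same logic in compact form (a sketch, not the verbatim helper):

    lt() {   # succeeds when version $1 is older than version $2
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"
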
geninfo_all_blocks=1 00:23:49.641 --rc geninfo_unexecuted_blocks=1 00:23:49.641 00:23:49.641 ' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.641 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.642 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
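
The "[: : integer expression expected" complaint captured above is a real, if harmless, bug at common.sh line 33: an unset flag expands to the empty string and is then handed to a numeric test. Reproduced in isolation, with an assumed guard as the fix (the actual variable name tested at line 33 is not visible in the trace):

    flag=''
    [ "$flag" -eq 1 ]        # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # assumed fix: default the empty value to 0 first
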
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:49.642 Cannot find device "nvmf_init_br" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:49.642 Cannot find device "nvmf_init_br2" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:49.642 Cannot find device "nvmf_tgt_br" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.642 Cannot find device "nvmf_tgt_br2" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:49.642 Cannot find device "nvmf_init_br" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:49.642 Cannot find device "nvmf_init_br2" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:49.642 Cannot find device "nvmf_tgt_br" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:49.642 Cannot find device "nvmf_tgt_br2" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:49.642 Cannot find device "nvmf_br" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:49.642 Cannot find 
device "nvmf_init_if" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:49.642 Cannot find device "nvmf_init_if2" 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:49.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:49.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:49.642 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:49.902 17:44:31 
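
nvmf_veth_init, traced above and continuing below, builds a small two-host topology: four veth pairs, with the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace and the four peer ends enslaved to a single bridge. The same setup condensed from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
        ip link set "$l" master nvmf_br
    done
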
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:49.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:49.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:49.902 00:23:49.902 --- 10.0.0.3 ping statistics --- 00:23:49.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.902 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:49.902 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:49.902 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:23:49.902 00:23:49.902 --- 10.0.0.4 ping statistics --- 00:23:49.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.902 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:49.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
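
Each ACCEPT rule above is installed through the ipts wrapper, which tags the rule with an SPDK_NVMF comment so the teardown path can later remove exactly these rules with the iptables-save | grep -v SPDK_NVMF | iptables-restore idiom seen earlier. A sketch that matches the traced expansion:

    ipts() {
        # embed the rule's own arguments in the comment for later filtering
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
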
00:23:49.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:49.902 00:23:49.902 --- 10.0.0.1 ping statistics --- 00:23:49.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.902 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:49.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:23:49.902 00:23:49.902 --- 10.0.0.2 ping statistics --- 00:23:49.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.902 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=103167 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 103167 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 103167 ']' 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.902 17:44:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:50.162 [2024-11-20 17:44:31.700824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:50.162 [2024-11-20 17:44:31.700985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.162 [2024-11-20 17:44:31.840451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.421 [2024-11-20 17:44:31.984611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.421 [2024-11-20 17:44:31.984694] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.421 [2024-11-20 17:44:31.984708] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.421 [2024-11-20 17:44:31.984720] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.421 [2024-11-20 17:44:31.984729] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.421 [2024-11-20 17:44:31.984763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.989 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.989 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:50.989 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:50.989 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.989 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 [2024-11-20 17:44:32.825856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
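
nvmfappstart, traced above, launches the target inside the namespace and then polls until the RPC socket answers before any configuration is attempted. The essential shape, with waitforlisten reduced to an assumed minimal loop (the real helper retries up to 100 times, per max_retries in the trace, and handles more failure modes):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
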
common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 Malloc0 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 [2024-11-20 17:44:32.872827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=103218 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=103219 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=103221 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 103218 00:23:51.247 17:44:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:51.506 [2024-11-20 17:44:33.057465] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
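
The rpc_cmd calls above are the entire fixture for this test: a TCP transport capped at a single control message (--control-msg-num 1) with 768-byte in-capsule data, one subsystem backed by a 32 MiB malloc bdev, and a listener on 10.0.0.3:4420; the three spdk_nvme_perf instances on separate cores then contend for that one control-message buffer. Replayed as plain RPC calls (assuming rpc_cmd forwards its arguments to scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a          # -a: allow any host
    $rpc bdev_malloc_create -b Malloc0 32 512                         # 32 MiB, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
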
subsystem. This behavior is deprecated and will be removed in a future release. 00:23:51.506 [2024-11-20 17:44:33.057758] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:51.506 [2024-11-20 17:44:33.082708] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:52.441 Initializing NVMe Controllers 00:23:52.441 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.441 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:52.441 Initialization complete. Launching workers. 00:23:52.441 ======================================================== 00:23:52.441 Latency(us) 00:23:52.441 Device Information : IOPS MiB/s Average min max 00:23:52.441 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3318.00 12.96 300.95 162.64 521.02 00:23:52.441 ======================================================== 00:23:52.441 Total : 3318.00 12.96 300.95 162.64 521.02 00:23:52.441 00:23:52.441 Initializing NVMe Controllers 00:23:52.441 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.441 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:52.441 Initialization complete. Launching workers. 00:23:52.441 ======================================================== 00:23:52.441 Latency(us) 00:23:52.441 Device Information : IOPS MiB/s Average min max 00:23:52.441 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3312.00 12.94 301.58 167.15 581.12 00:23:52.441 ======================================================== 00:23:52.441 Total : 3312.00 12.94 301.58 167.15 581.12 00:23:52.441 00:23:52.441 Initializing NVMe Controllers 00:23:52.441 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:52.441 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:52.441 Initialization complete. Launching workers. 
00:23:52.441 ======================================================== 00:23:52.441 Latency(us) 00:23:52.441 Device Information : IOPS MiB/s Average min max 00:23:52.441 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3340.00 13.05 299.04 129.17 414.41 00:23:52.441 ======================================================== 00:23:52.441 Total : 3340.00 13.05 299.04 129.17 414.41 00:23:52.441 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 103219 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 103221 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.441 rmmod nvme_tcp 00:23:52.441 rmmod nvme_fabrics 00:23:52.441 rmmod nvme_keyring 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 103167 ']' 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 103167 00:23:52.441 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 103167 ']' 00:23:52.442 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 103167 00:23:52.442 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:52.442 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.442 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103167 00:23:52.699 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:52.700 killing process with pid 103167 00:23:52.700 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:52.700 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103167' 00:23:52.700 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 103167 00:23:52.700 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
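
All three initiators finish with comparable results (about 3,300 IOPS at roughly 300 us average latency each), suggesting the single control-message buffer was shared across the three connections without starving any of them; that the runs complete at all is the pass condition. Each instance was launched as traced, one core mask per run:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    for mask in 0x2 0x4 0x8; do
        $perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait
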
common/autotest_common.sh@974 -- # wait 103167 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:52.958 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:23:53.215 ************************************ 00:23:53.215 END TEST 
nvmf_control_msg_list 00:23:53.215 ************************************ 00:23:53.215 00:23:53.215 real 0m3.839s 00:23:53.215 user 0m5.811s 00:23:53.215 sys 0m1.556s 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.215 ************************************ 00:23:53.215 START TEST nvmf_wait_for_buf 00:23:53.215 ************************************ 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:53.215 * Looking for test storage... 00:23:53.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:53.215 17:44:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.475 --rc genhtml_branch_coverage=1 00:23:53.475 --rc genhtml_function_coverage=1 00:23:53.475 --rc genhtml_legend=1 00:23:53.475 --rc geninfo_all_blocks=1 00:23:53.475 --rc geninfo_unexecuted_blocks=1 00:23:53.475 00:23:53.475 ' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.475 --rc genhtml_branch_coverage=1 00:23:53.475 --rc genhtml_function_coverage=1 00:23:53.475 --rc genhtml_legend=1 00:23:53.475 --rc geninfo_all_blocks=1 00:23:53.475 --rc geninfo_unexecuted_blocks=1 00:23:53.475 00:23:53.475 ' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.475 --rc genhtml_branch_coverage=1 00:23:53.475 --rc genhtml_function_coverage=1 00:23:53.475 --rc genhtml_legend=1 00:23:53.475 --rc geninfo_all_blocks=1 00:23:53.475 --rc geninfo_unexecuted_blocks=1 00:23:53.475 00:23:53.475 ' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.475 --rc genhtml_branch_coverage=1 00:23:53.475 --rc genhtml_function_coverage=1 00:23:53.475 --rc genhtml_legend=1 00:23:53.475 --rc geninfo_all_blocks=1 00:23:53.475 --rc geninfo_unexecuted_blocks=1 00:23:53.475 00:23:53.475 ' 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:53.475 17:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:23:53.475 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
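The version walk traced at the top of this section is scripts/common.sh's cmp_versions at work: lt 1.15 2 splits each version string on dots, compares the fields numerically from left to right, and the verdict decides whether the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 switches are baked into the LCOV_OPTS and LCOV exports seen above. A condensed sketch of that field-by-field compare, assuming dot-separated numeric fields only (the real helper also copes with '-' separators; the function name here is illustrative, not the script's own):

    # version_lt A B: succeed when dotted version A sorts strictly before B.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the old --rc lcov_* option names"

With lcov 1.15 on this runner the compare succeeds, which is why every test in this log carries the identical multi-line LCOV_OPTS export block.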
00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:53.476 Cannot find device "nvmf_init_br" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:53.476 Cannot find device "nvmf_init_br2" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:53.476 Cannot find device "nvmf_tgt_br" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:53.476 Cannot find device "nvmf_tgt_br2" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:53.476 Cannot find device "nvmf_init_br" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:53.476 Cannot find device "nvmf_init_br2" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:53.476 Cannot find device "nvmf_tgt_br" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:53.476 Cannot find device "nvmf_tgt_br2" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:53.476 Cannot find device "nvmf_br" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:53.476 Cannot find device "nvmf_init_if" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:53.476 Cannot find device "nvmf_init_if2" 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:53.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:23:53.476 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:53.736 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:53.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:53.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:23:53.736 00:23:53.736 --- 10.0.0.3 ping statistics --- 00:23:53.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.736 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:53.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:53.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:23:53.736 00:23:53.736 --- 10.0.0.4 ping statistics --- 00:23:53.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.736 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:53.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:23:53.736 00:23:53.736 --- 10.0.0.1 ping statistics --- 00:23:53.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.736 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:53.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:53.736 00:23:53.736 --- 10.0.0.2 ping statistics --- 00:23:53.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.736 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:53.736 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=103456 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 103456 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 103456 ']' 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.994 17:44:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:53.994 [2024-11-20 17:44:35.644153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:53.994 [2024-11-20 17:44:35.645201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.253 [2024-11-20 17:44:35.788102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.253 [2024-11-20 17:44:35.936233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.253 [2024-11-20 17:44:35.936316] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.253 [2024-11-20 17:44:35.936332] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.253 [2024-11-20 17:44:35.936343] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.253 [2024-11-20 17:44:35.936352] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.253 [2024-11-20 17:44:35.936399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.188 17:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 Malloc0 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 [2024-11-20 17:44:36.882617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.188 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.189 [2024-11-20 17:44:36.910780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.189 17:44:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:55.447 [2024-11-20 17:44:37.093234] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:23:56.821 Initializing NVMe Controllers 00:23:56.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:56.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:56.821 Initialization complete. Launching workers. 00:23:56.821 ======================================================== 00:23:56.821 Latency(us) 00:23:56.821 Device Information : IOPS MiB/s Average min max 00:23:56.821 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.00 16.00 32328.10 8001.79 62104.35 00:23:56.821 ======================================================== 00:23:56.821 Total : 128.00 16.00 32328.10 8001.79 62104.35 00:23:56.821 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.821 rmmod nvme_tcp 00:23:56.821 rmmod nvme_fabrics 00:23:56.821 rmmod nvme_keyring 00:23:56.821 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 103456 ']' 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 103456 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 103456 ']' 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 103456 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 
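That retry counter is the whole point of wait_for_buf: the transport was created over an iobuf small pool deliberately starved down to 154 buffers (the iobuf_set_options call above), a queue-depth-4 stream of 128 KiB random reads was pushed through it, and the test passes only if nvmf_TCP had to retry small-buffer allocations (2022 times here) while still completing every I/O, proving the target waits for buffers under exhaustion instead of failing. The check, restated as a standalone sketch (rpc.py path as used throughout this log; the jq filter is copied verbatim from the trace above):

    # Read the nvmf_TCP small-pool retry counter and require it to be nonzero.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    retry_count=$($rpc iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo "no buffer-pool pressure observed"; exit 1; }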
00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103456 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.080 killing process with pid 103456 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103456' 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 103456 00:23:57.080 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 103456 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:57.338 17:44:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:57.338 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.678 17:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:23:57.678 00:23:57.678 real 0m4.304s 00:23:57.678 user 0m3.859s 00:23:57.678 sys 0m0.920s 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.678 ************************************ 00:23:57.678 END TEST nvmf_wait_for_buf 00:23:57.678 ************************************ 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.678 17:44:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.678 ************************************ 00:23:57.678 START TEST nvmf_fuzz 00:23:57.678 ************************************ 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:57.679 * Looking for test storage... 
00:23:57.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.679 --rc genhtml_branch_coverage=1 00:23:57.679 --rc genhtml_function_coverage=1 00:23:57.679 --rc genhtml_legend=1 00:23:57.679 --rc geninfo_all_blocks=1 00:23:57.679 --rc geninfo_unexecuted_blocks=1 00:23:57.679 00:23:57.679 ' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.679 --rc genhtml_branch_coverage=1 00:23:57.679 --rc genhtml_function_coverage=1 00:23:57.679 --rc genhtml_legend=1 00:23:57.679 --rc geninfo_all_blocks=1 00:23:57.679 --rc geninfo_unexecuted_blocks=1 00:23:57.679 00:23:57.679 ' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.679 --rc genhtml_branch_coverage=1 00:23:57.679 --rc genhtml_function_coverage=1 00:23:57.679 --rc genhtml_legend=1 00:23:57.679 --rc geninfo_all_blocks=1 00:23:57.679 --rc geninfo_unexecuted_blocks=1 00:23:57.679 00:23:57.679 ' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.679 --rc genhtml_branch_coverage=1 00:23:57.679 --rc genhtml_function_coverage=1 00:23:57.679 --rc genhtml_legend=1 00:23:57.679 --rc geninfo_all_blocks=1 00:23:57.679 --rc geninfo_unexecuted_blocks=1 00:23:57.679 00:23:57.679 ' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
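From here the fuzz test re-sources test/nvmf/common.sh and the environment setup repeats: the same lcov probe just traced, the same port and serial defaults, and another nvme gen-hostnqn call that yields the same UUID as in the wait_for_buf run above (suggesting the NQN is derived from stable host identity rather than generated fresh per call). A sketch of the host-identity capture traced just below; the NVME_HOST array line is verbatim from common.sh, while the suffix extraction is an assumed equivalent, not the script's literal line:

    # gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<UUID>; the UUID
    # suffix doubles as the host ID handed to nvme connect.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}          # assumed extraction
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")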
00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:57.679 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
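nvmftestinit now rebuilds, for the fuzz run, the same virtual fabric the previous test tore down moments earlier: four veth pairs, with the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) pushed into the nvmf_tgt_ns_spdk namespace and every peer interface enslaved to the nvmf_br bridge, so the initiator addresses 10.0.0.1 and 10.0.0.2 in the root namespace reach the target over plain L2. Condensed from the ip commands traced below, showing one pair of each kind (the script repeats the pattern for the *2 interfaces):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up

    # Initiator side: the address stays in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_init_br master nvmf_br

    # Target side: the "if" end moves inside the namespace.
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The iptables rules that follow carry an SPDK_NVMF comment tag for one reason: teardown later strips them back out with iptables-save | grep -v SPDK_NVMF | iptables-restore, as the iptr call in the previous test's cleanup shows.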
00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.680 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:57.938 Cannot find device "nvmf_init_br" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:23:57.938 17:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:57.938 Cannot find device "nvmf_init_br2" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:57.938 Cannot find device "nvmf_tgt_br" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.938 Cannot find device "nvmf_tgt_br2" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:57.938 Cannot find device "nvmf_init_br" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:57.938 Cannot find device "nvmf_init_br2" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:57.938 Cannot find device "nvmf_tgt_br" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:57.938 Cannot find device "nvmf_tgt_br2" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:57.938 Cannot find device "nvmf_br" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:57.938 Cannot find device "nvmf_init_if" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:57.938 Cannot find device "nvmf_init_if2" 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:57.938 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:57.939 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:58.197 17:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:58.197 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:58.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:58.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:23:58.198 00:23:58.198 --- 10.0.0.3 ping statistics --- 00:23:58.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.198 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:58.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:58.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:23:58.198 00:23:58.198 --- 10.0.0.4 ping statistics --- 00:23:58.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.198 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:58.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:23:58.198 00:23:58.198 --- 10.0.0.1 ping statistics --- 00:23:58.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.198 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:58.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:58.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:23:58.198 00:23:58.198 --- 10.0.0.2 ping statistics --- 00:23:58.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.198 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=103765 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 103765 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 103765 ']' 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
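The entries above are nvmf_veth_init rebuilding the fuzz test's virtual fabric from scratch: the "Cannot find device" errors are the expected result of tearing down interfaces that do not exist yet, after which the script creates a network namespace for the target, four veth pairs, a bridge joining their peer ends, and a ping in each direction before launching nvmf_tgt inside the namespace. A condensed sketch of that topology, using the same device names and addresses as the trace (teardown, iptables rules, and error handling omitted):

ip netns add nvmf_tgt_ns_spdk

# two initiator-side pairs and two target-side pairs; the *_br ends stay
# in the root namespace so they can be bridged together
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# move the target ends into the namespace, then address both sides
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, then enslave the peer ends to one bridge so the
# initiator and target share a single L2 segment
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# prove both directions work before the target starts
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1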
00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.198 17:44:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.575 17:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 Malloc0 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:23:59.575 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:23:59.834 Shutting down the fuzz application 00:23:59.834 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:00.093 Shutting down the fuzz application 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.093 rmmod nvme_tcp 00:24:00.093 rmmod nvme_fabrics 00:24:00.093 rmmod nvme_keyring 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 103765 ']' 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 103765 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 103765 ']' 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 103765 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103765 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103765' 00:24:00.093 killing process with pid 103765 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 103765 00:24:00.093 17:44:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 103765 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:00.661 
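The two nvme_fuzz invocations above are the heart of fabrics_fuzz.sh: a time-boxed pass of randomized commands with a fixed RNG seed (so any crash can be replayed), followed by a pass that feeds the curated requests from example.json to the same subsystem. Stripped of the wrapper functions, and commenting only the flags whose meaning the trace makes clear:

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

# pass 1: 30 seconds (-t 30) of random commands, seeded (-S 123456) for
# reproducibility, aimed at the transport ID in $trid (-F); -N and -a as traced
"$fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

# pass 2: replay the hand-written request file (-j) against the same target
"$fuzz" -m 0x2 -F "$trid" \
    -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a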
17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.661 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:24:00.989 ************************************ 00:24:00.989 END TEST nvmf_fuzz 00:24:00.989 ************************************ 00:24:00.989 00:24:00.989 real 0m3.222s 00:24:00.989 user 0m3.088s 00:24:00.989 sys 0m0.874s 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.989 
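Everything the fuzz test set up is now torn down again, and the iptables cleanup shows why setup tagged its rules: ipts appends "-m comment --comment SPDK_NVMF:..." to every insertion, and iptr (the @787 entries above) removes them by round-tripping the ruleset through a filter. Roughly, assuming the three traced commands form one pipeline:

# setup (via ipts): the rule carries a recognizable comment tag
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# teardown (via iptr): rewrite the ruleset without any tagged rule,
# leaving unrelated firewall state untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore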
17:44:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:00.989 ************************************ 00:24:00.989 START TEST nvmf_multiconnection 00:24:00.989 ************************************ 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:00.989 * Looking for test storage... 00:24:00.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:00.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.989 --rc genhtml_branch_coverage=1 00:24:00.989 --rc genhtml_function_coverage=1 00:24:00.989 --rc genhtml_legend=1 00:24:00.989 --rc geninfo_all_blocks=1 00:24:00.989 --rc geninfo_unexecuted_blocks=1 00:24:00.989 00:24:00.989 ' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:00.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.989 --rc genhtml_branch_coverage=1 00:24:00.989 --rc genhtml_function_coverage=1 00:24:00.989 --rc genhtml_legend=1 00:24:00.989 --rc geninfo_all_blocks=1 00:24:00.989 --rc geninfo_unexecuted_blocks=1 00:24:00.989 00:24:00.989 ' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:00.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.989 --rc genhtml_branch_coverage=1 00:24:00.989 --rc genhtml_function_coverage=1 00:24:00.989 --rc genhtml_legend=1 00:24:00.989 --rc geninfo_all_blocks=1 00:24:00.989 --rc geninfo_unexecuted_blocks=1 00:24:00.989 00:24:00.989 ' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:00.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.989 --rc genhtml_branch_coverage=1 00:24:00.989 --rc genhtml_function_coverage=1 00:24:00.989 --rc genhtml_legend=1 00:24:00.989 --rc geninfo_all_blocks=1 00:24:00.989 --rc geninfo_unexecuted_blocks=1 00:24:00.989 00:24:00.989 ' 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:24:00.989 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.990 
17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:00.990 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:01.268 17:44:42 
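One genuine wart surfaced just above, during build_nvmf_app_args: the test '[' '' -eq 1 ']' made bash print "line 33: [: : integer expression expected", because an unset toggle variable expands to the empty string before the numeric comparison. The harness shrugs it off (the branch simply is not taken), but a defaulting expansion would silence the noise. A minimal sketch, with SOME_FLAG standing in for whatever variable common.sh line 33 actually tests:

# hypothetical guard: default the unset flag to 0 so test(1) always sees an integer
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi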
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:01.268 Cannot find device "nvmf_init_br" 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:01.268 Cannot find device "nvmf_init_br2" 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:01.268 Cannot find device "nvmf_tgt_br" 00:24:01.268 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:01.269 Cannot find device "nvmf_tgt_br2" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:01.269 Cannot find device "nvmf_init_br" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:01.269 Cannot find device "nvmf_init_br2" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:01.269 Cannot find device "nvmf_tgt_br" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:01.269 Cannot find device "nvmf_tgt_br2" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:01.269 Cannot find device "nvmf_br" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:01.269 Cannot find device "nvmf_init_if" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:24:01.269 Cannot find device "nvmf_init_if2" 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:01.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:01.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:24:01.269 17:44:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:01.269 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:01.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:01.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:24:01.528 00:24:01.528 --- 10.0.0.3 ping statistics --- 00:24:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.528 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:01.528 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:01.528 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:24:01.528 00:24:01.528 --- 10.0.0.4 ping statistics --- 00:24:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.528 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:01.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:24:01.528 00:24:01.528 --- 10.0.0.1 ping statistics --- 00:24:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.528 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:01.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:24:01.528 00:24:01.528 --- 10.0.0.2 ping statistics --- 00:24:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.528 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.528 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=104037 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 104037 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 104037 ']' 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
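The multiconnection test now launches its own target inside the rebuilt namespace, this time on four cores (-m 0xF; the four reactor threads show up in the startup notices below), and nvmfappstart blocks until the app answers on its UNIX-domain RPC socket. A simplified sketch of that start-and-wait pattern (the real waitforlisten has a retry limit and richer error handling):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the RPC socket; rpc_get_methods succeeds once the app is listening
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.1
done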
00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.529 17:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.529 [2024-11-20 17:44:43.194739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:01.529 [2024-11-20 17:44:43.194836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.788 [2024-11-20 17:44:43.334742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.788 [2024-11-20 17:44:43.467484] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.788 [2024-11-20 17:44:43.467560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.788 [2024-11-20 17:44:43.467572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.788 [2024-11-20 17:44:43.467580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.788 [2024-11-20 17:44:43.467588] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.788 [2024-11-20 17:44:43.468933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.788 [2024-11-20 17:44:43.469123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.788 [2024-11-20 17:44:43.469293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.788 [2024-11-20 17:44:43.469650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.723 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.723 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:02.723 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:02.723 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.723 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 [2024-11-20 17:44:44.272008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.724 17:44:44 
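The loop that starts here repeats one provisioning pattern for all eleven subsystems (NVMF_SUBSYS=11): a 64 MB malloc bdev with 512-byte blocks becomes the sole namespace of its own subsystem, and every subsystem listens on the same 10.0.0.3:4420 TCP portal. Condensed, with rpc_cmd as the trace's RPC wrapper:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # done once, just above

for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"  # 64 MB, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done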
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 Malloc1 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 [2024-11-20 17:44:44.342380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 Malloc2 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 Malloc3 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc4 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 Malloc4 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.724 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.983 Malloc5 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.983 17:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.983 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.983 Malloc6 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.984 Malloc7 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 Malloc8 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 
17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.984 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 Malloc9 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 Malloc10 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 Malloc11 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:24:03.243 
17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.243 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:03.244 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.244 17:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:03.502 17:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:03.502 17:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:03.502 17:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:03.502 17:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:03.502 17:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:05.406 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:24:05.665 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:05.665 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:05.665 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:05.665 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:05.665 17:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:07.566 17:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.566 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:24:07.824 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:07.824 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.824 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.824 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.824 17:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:09.726 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:09.727 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:09.727 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:09.985 17:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 
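# A condensed sketch of the two loops target/multiconnection.sh is tracing
# above. rpc_cmd, waitforserial, and every flag shown appear verbatim in the
# xtrace output; the waitforserial body below is reconstructed from the
# traced autotest_common.sh lines (@1198-@1208) rather than quoted from SPDK,
# and HOSTNQN/HOSTID stand in for the literal UUID the script passes to
# nvme connect.
NVMF_SUBSYS=11                                    # matches the visible `seq 1 11`
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done
waitforserial() {                                 # poll until the namespace appears
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while ((i++ <= 15)); do                       # at most 16 tries, 2 s apart
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1
}
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    waitforserial "SPDK$i"
done
# Once all eleven serials are visible, the namespaces surface as
# /dev/nvme0n1 .. /dev/nvme10n1, and the fio read and randwrite workloads
# further down in this log run against all of them at once.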
00:24:11.930 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:11.930 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:11.930 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:12.188 17:44:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.720 17:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:24:14.720 17:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:14.720 17:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:14.720 17:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.720 17:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:14.720 17:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:16.623 17:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:19.150 17:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:21.047 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:21.047 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:21.048 17:45:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:22.944 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:23.202 17:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:23.202 17:45:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.729 17:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:24:25.729 17:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:25.729 17:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:25.729 17:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.729 17:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:25.729 17:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:27.628 17:45:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:27.628 [global] 00:24:27.628 thread=1 00:24:27.628 invalidate=1 00:24:27.628 rw=read 00:24:27.628 time_based=1 00:24:27.628 runtime=10 00:24:27.628 ioengine=libaio 00:24:27.628 direct=1 00:24:27.628 bs=262144 00:24:27.628 iodepth=64 
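; The [global] options in this generated job file mirror the fio-wrapper
; invocation above: -i 262144 -> bs, -d 64 -> iodepth, -t read -> rw,
; -r 10 -> runtime; -p nvmf appears to expand to one [jobN] section per
; connected /dev/nvmeXnY namespace (eleven jobs for the eleven subsystems).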
00:24:27.628 norandommap=1 00:24:27.628 numjobs=1 00:24:27.628 00:24:27.628 [job0] 00:24:27.628 filename=/dev/nvme0n1 00:24:27.628 [job1] 00:24:27.628 filename=/dev/nvme10n1 00:24:27.628 [job2] 00:24:27.628 filename=/dev/nvme1n1 00:24:27.628 [job3] 00:24:27.628 filename=/dev/nvme2n1 00:24:27.628 [job4] 00:24:27.628 filename=/dev/nvme3n1 00:24:27.628 [job5] 00:24:27.628 filename=/dev/nvme4n1 00:24:27.628 [job6] 00:24:27.628 filename=/dev/nvme5n1 00:24:27.628 [job7] 00:24:27.628 filename=/dev/nvme6n1 00:24:27.628 [job8] 00:24:27.628 filename=/dev/nvme7n1 00:24:27.628 [job9] 00:24:27.628 filename=/dev/nvme8n1 00:24:27.628 [job10] 00:24:27.628 filename=/dev/nvme9n1 00:24:27.628 Could not set queue depth (nvme0n1) 00:24:27.628 Could not set queue depth (nvme10n1) 00:24:27.628 Could not set queue depth (nvme1n1) 00:24:27.628 Could not set queue depth (nvme2n1) 00:24:27.628 Could not set queue depth (nvme3n1) 00:24:27.628 Could not set queue depth (nvme4n1) 00:24:27.628 Could not set queue depth (nvme5n1) 00:24:27.628 Could not set queue depth (nvme6n1) 00:24:27.628 Could not set queue depth (nvme7n1) 00:24:27.628 Could not set queue depth (nvme8n1) 00:24:27.628 Could not set queue depth (nvme9n1) 00:24:27.886 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.886 fio-3.35 00:24:27.886 Starting 11 threads 00:24:40.159 00:24:40.159 job0: (groupid=0, jobs=1): err= 0: pid=104506: Wed Nov 20 17:45:19 2024 00:24:40.159 read: IOPS=71, BW=18.0MiB/s (18.8MB/s)(184MiB/10208msec) 00:24:40.159 slat (usec): min=11, max=984854, avg=13710.84, stdev=63916.43 00:24:40.159 clat (msec): min=26, max=1604, avg=873.68, stdev=316.56 00:24:40.159 lat (msec): min=26, max=1905, avg=887.39, stdev=319.11 00:24:40.159 clat percentiles (msec): 00:24:40.159 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 592], 20.00th=[ 718], 00:24:40.159 | 30.00th=[ 802], 40.00th=[ 844], 50.00th=[ 885], 60.00th=[ 927], 00:24:40.159 | 70.00th=[ 961], 80.00th=[ 1011], 90.00th=[ 1200], 95.00th=[ 1569], 00:24:40.159 | 99.00th=[ 1603], 99.50th=[ 1603], 99.90th=[ 1603], 99.95th=[ 1603], 00:24:40.159 | 99.99th=[ 1603] 00:24:40.159 bw ( KiB/s): min= 6656, max=28614, per=3.28%, avg=18049.11, stdev=6588.66, samples=19 00:24:40.159 iops : min= 26, max= 111, avg=70.42, stdev=25.63, 
samples=19 00:24:40.159 lat (msec) : 50=5.45%, 250=0.14%, 500=2.72%, 750=12.26%, 1000=56.54% 00:24:40.159 lat (msec) : 2000=22.89% 00:24:40.159 cpu : usr=0.02%, sys=0.49%, ctx=113, majf=0, minf=4097 00:24:40.159 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:24:40.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.159 issued rwts: total=734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.159 job1: (groupid=0, jobs=1): err= 0: pid=104507: Wed Nov 20 17:45:19 2024 00:24:40.159 read: IOPS=138, BW=34.5MiB/s (36.2MB/s)(353MiB/10208msec) 00:24:40.159 slat (usec): min=16, max=514344, avg=6579.50, stdev=30235.09 00:24:40.159 clat (msec): min=19, max=1074, avg=455.26, stdev=128.72 00:24:40.159 lat (msec): min=19, max=1074, avg=461.84, stdev=132.78 00:24:40.159 clat percentiles (msec): 00:24:40.159 | 1.00th=[ 79], 5.00th=[ 148], 10.00th=[ 259], 20.00th=[ 409], 00:24:40.159 | 30.00th=[ 435], 40.00th=[ 451], 50.00th=[ 468], 60.00th=[ 485], 00:24:40.159 | 70.00th=[ 502], 80.00th=[ 535], 90.00th=[ 609], 95.00th=[ 634], 00:24:40.159 | 99.00th=[ 701], 99.50th=[ 726], 99.90th=[ 751], 99.95th=[ 1083], 00:24:40.159 | 99.99th=[ 1083] 00:24:40.159 bw ( KiB/s): min=23504, max=56207, per=6.26%, avg=34466.70, stdev=6461.87, samples=20 00:24:40.159 iops : min= 91, max= 219, avg=134.45, stdev=25.24, samples=20 00:24:40.159 lat (msec) : 20=0.21%, 50=0.07%, 100=2.41%, 250=6.03%, 500=60.92% 00:24:40.159 lat (msec) : 750=30.28%, 2000=0.07% 00:24:40.159 cpu : usr=0.08%, sys=0.78%, ctx=289, majf=0, minf=4097 00:24:40.159 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:24:40.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.159 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.159 job2: (groupid=0, jobs=1): err= 0: pid=104508: Wed Nov 20 17:45:19 2024 00:24:40.159 read: IOPS=139, BW=34.9MiB/s (36.6MB/s)(357MiB/10217msec) 00:24:40.159 slat (usec): min=17, max=219472, avg=6604.44, stdev=23575.65 00:24:40.159 clat (msec): min=38, max=801, avg=449.95, stdev=128.21 00:24:40.159 lat (msec): min=38, max=880, avg=456.55, stdev=131.57 00:24:40.159 clat percentiles (msec): 00:24:40.159 | 1.00th=[ 72], 5.00th=[ 178], 10.00th=[ 251], 20.00th=[ 409], 00:24:40.159 | 30.00th=[ 443], 40.00th=[ 464], 50.00th=[ 472], 60.00th=[ 485], 00:24:40.159 | 70.00th=[ 506], 80.00th=[ 527], 90.00th=[ 567], 95.00th=[ 617], 00:24:40.159 | 99.00th=[ 751], 99.50th=[ 751], 99.90th=[ 802], 99.95th=[ 802], 00:24:40.159 | 99.99th=[ 802] 00:24:40.159 bw ( KiB/s): min=22060, max=66048, per=6.35%, avg=34927.80, stdev=9864.50, samples=20 00:24:40.159 iops : min= 86, max= 258, avg=136.25, stdev=38.59, samples=20 00:24:40.159 lat (msec) : 50=0.56%, 100=2.03%, 250=7.07%, 500=56.51%, 750=33.61% 00:24:40.159 lat (msec) : 1000=0.21% 00:24:40.159 cpu : usr=0.11%, sys=0.71%, ctx=280, majf=0, minf=4097 00:24:40.159 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:24:40.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.159 issued rwts: total=1428,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:40.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.159 job3: (groupid=0, jobs=1): err= 0: pid=104509: Wed Nov 20 17:45:19 2024 00:24:40.159 read: IOPS=154, BW=38.7MiB/s (40.6MB/s)(395MiB/10214msec) 00:24:40.159 slat (usec): min=16, max=269531, avg=6273.07, stdev=22747.73 00:24:40.159 clat (msec): min=41, max=760, avg=405.82, stdev=154.05 00:24:40.159 lat (msec): min=41, max=760, avg=412.09, stdev=157.34 00:24:40.159 clat percentiles (msec): 00:24:40.159 | 1.00th=[ 51], 5.00th=[ 113], 10.00th=[ 148], 20.00th=[ 228], 00:24:40.159 | 30.00th=[ 330], 40.00th=[ 443], 50.00th=[ 464], 60.00th=[ 493], 00:24:40.159 | 70.00th=[ 510], 80.00th=[ 527], 90.00th=[ 550], 95.00th=[ 575], 00:24:40.159 | 99.00th=[ 625], 99.50th=[ 667], 99.90th=[ 760], 99.95th=[ 760], 00:24:40.159 | 99.99th=[ 760] 00:24:40.159 bw ( KiB/s): min=28614, max=88064, per=7.06%, avg=38847.75, stdev=17249.20, samples=20 00:24:40.159 iops : min= 111, max= 344, avg=151.60, stdev=67.44, samples=20 00:24:40.159 lat (msec) : 50=1.01%, 100=3.23%, 250=19.42%, 500=40.73%, 750=35.42% 00:24:40.159 lat (msec) : 1000=0.19% 00:24:40.159 cpu : usr=0.06%, sys=0.78%, ctx=422, majf=0, minf=4097 00:24:40.159 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:24:40.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.159 issued rwts: total=1581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.159 job4: (groupid=0, jobs=1): err= 0: pid=104510: Wed Nov 20 17:45:19 2024 00:24:40.159 read: IOPS=507, BW=127MiB/s (133MB/s)(1281MiB/10094msec) 00:24:40.159 slat (usec): min=17, max=95725, avg=1849.26, stdev=8126.60 00:24:40.159 clat (msec): min=4, max=573, avg=123.84, stdev=42.04 00:24:40.159 lat (msec): min=39, max=574, avg=125.69, stdev=42.44 00:24:40.159 clat percentiles (msec): 00:24:40.159 | 1.00th=[ 72], 5.00th=[ 88], 10.00th=[ 93], 20.00th=[ 101], 00:24:40.159 | 30.00th=[ 106], 40.00th=[ 111], 50.00th=[ 116], 60.00th=[ 122], 00:24:40.159 | 70.00th=[ 128], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 184], 00:24:40.159 | 99.00th=[ 334], 99.50th=[ 368], 99.90th=[ 506], 99.95th=[ 510], 00:24:40.159 | 99.99th=[ 575] 00:24:40.160 bw ( KiB/s): min=31169, max=167936, per=23.55%, avg=129607.85, stdev=29334.29, samples=20 00:24:40.160 iops : min= 121, max= 656, avg=506.05, stdev=114.72, samples=20 00:24:40.160 lat (msec) : 10=0.02%, 50=0.21%, 100=20.12%, 250=78.15%, 500=1.23% 00:24:40.160 lat (msec) : 750=0.27% 00:24:40.160 cpu : usr=0.20%, sys=1.90%, ctx=1058, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=5125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job5: (groupid=0, jobs=1): err= 0: pid=104511: Wed Nov 20 17:45:19 2024 00:24:40.160 read: IOPS=76, BW=19.1MiB/s (20.0MB/s)(195MiB/10213msec) 00:24:40.160 slat (usec): min=20, max=415433, avg=12986.32, stdev=47242.26 00:24:40.160 clat (msec): min=54, max=1317, avg=823.48, stdev=245.71 00:24:40.160 lat (msec): min=55, max=1317, avg=836.47, stdev=247.16 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 90], 5.00th=[ 101], 
10.00th=[ 609], 20.00th=[ 709], 00:24:40.160 | 30.00th=[ 776], 40.00th=[ 818], 50.00th=[ 860], 60.00th=[ 902], 00:24:40.160 | 70.00th=[ 944], 80.00th=[ 1003], 90.00th=[ 1053], 95.00th=[ 1133], 00:24:40.160 | 99.00th=[ 1301], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318], 00:24:40.160 | 99.99th=[ 1318] 00:24:40.160 bw ( KiB/s): min=11264, max=24112, per=3.33%, avg=18336.20, stdev=4118.03, samples=20 00:24:40.160 iops : min= 44, max= 94, avg=71.45, stdev=16.16, samples=20 00:24:40.160 lat (msec) : 100=4.49%, 250=1.41%, 500=2.18%, 750=18.46%, 1000=52.95% 00:24:40.160 lat (msec) : 2000=20.51% 00:24:40.160 cpu : usr=0.04%, sys=0.52%, ctx=102, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job6: (groupid=0, jobs=1): err= 0: pid=104512: Wed Nov 20 17:45:19 2024 00:24:40.160 read: IOPS=158, BW=39.7MiB/s (41.6MB/s)(406MiB/10210msec) 00:24:40.160 slat (usec): min=14, max=218927, avg=6007.48, stdev=20166.42 00:24:40.160 clat (msec): min=24, max=636, avg=395.57, stdev=141.63 00:24:40.160 lat (msec): min=24, max=707, avg=401.57, stdev=144.97 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 30], 5.00th=[ 148], 10.00th=[ 192], 20.00th=[ 234], 00:24:40.160 | 30.00th=[ 330], 40.00th=[ 435], 50.00th=[ 456], 60.00th=[ 468], 00:24:40.160 | 70.00th=[ 485], 80.00th=[ 502], 90.00th=[ 527], 95.00th=[ 558], 00:24:40.160 | 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 634], 99.95th=[ 634], 00:24:40.160 | 99.99th=[ 634] 00:24:40.160 bw ( KiB/s): min=28160, max=83456, per=7.25%, avg=39884.80, stdev=16411.80, samples=20 00:24:40.160 iops : min= 110, max= 326, avg=155.75, stdev=64.13, samples=20 00:24:40.160 lat (msec) : 50=2.96%, 100=1.36%, 250=18.37%, 500=56.29%, 750=21.02% 00:24:40.160 cpu : usr=0.09%, sys=0.83%, ctx=302, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job7: (groupid=0, jobs=1): err= 0: pid=104513: Wed Nov 20 17:45:19 2024 00:24:40.160 read: IOPS=133, BW=33.5MiB/s (35.1MB/s)(342MiB/10203msec) 00:24:40.160 slat (usec): min=17, max=580742, avg=7275.75, stdev=31958.35 00:24:40.160 clat (msec): min=41, max=869, avg=469.40, stdev=131.30 00:24:40.160 lat (msec): min=42, max=1189, avg=476.68, stdev=135.12 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 47], 5.00th=[ 96], 10.00th=[ 393], 20.00th=[ 418], 00:24:40.160 | 30.00th=[ 439], 40.00th=[ 451], 50.00th=[ 468], 60.00th=[ 485], 00:24:40.160 | 70.00th=[ 502], 80.00th=[ 527], 90.00th=[ 625], 95.00th=[ 667], 00:24:40.160 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 869], 99.95th=[ 869], 00:24:40.160 | 99.99th=[ 869] 00:24:40.160 bw ( KiB/s): min=16896, max=40016, per=6.06%, avg=33342.00, stdev=5213.31, samples=20 00:24:40.160 iops : min= 66, max= 156, avg=130.05, stdev=20.35, samples=20 00:24:40.160 lat (msec) : 50=1.54%, 100=3.51%, 250=0.44%, 500=63.32%, 750=28.99% 
00:24:40.160 lat (msec) : 1000=2.20% 00:24:40.160 cpu : usr=0.07%, sys=0.75%, ctx=308, majf=0, minf=4098 00:24:40.160 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=1366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job8: (groupid=0, jobs=1): err= 0: pid=104514: Wed Nov 20 17:45:19 2024 00:24:40.160 read: IOPS=499, BW=125MiB/s (131MB/s)(1260MiB/10087msec) 00:24:40.160 slat (usec): min=17, max=112077, avg=1960.11, stdev=9076.18 00:24:40.160 clat (msec): min=42, max=286, avg=125.79, stdev=30.41 00:24:40.160 lat (msec): min=42, max=304, avg=127.75, stdev=31.20 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 70], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 104], 00:24:40.160 | 30.00th=[ 109], 40.00th=[ 114], 50.00th=[ 120], 60.00th=[ 126], 00:24:40.160 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 188], 00:24:40.160 | 99.00th=[ 222], 99.50th=[ 226], 99.90th=[ 264], 99.95th=[ 288], 00:24:40.160 | 99.99th=[ 288] 00:24:40.160 bw ( KiB/s): min=83968, max=157184, per=23.14%, avg=127323.40, stdev=21077.04, samples=20 00:24:40.160 iops : min= 328, max= 614, avg=497.30, stdev=82.31, samples=20 00:24:40.160 lat (msec) : 50=0.18%, 100=16.41%, 250=83.25%, 500=0.16% 00:24:40.160 cpu : usr=0.16%, sys=1.81%, ctx=794, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=5039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job9: (groupid=0, jobs=1): err= 0: pid=104515: Wed Nov 20 17:45:19 2024 00:24:40.160 read: IOPS=156, BW=39.1MiB/s (41.0MB/s)(399MiB/10215msec) 00:24:40.160 slat (usec): min=20, max=302676, avg=5709.03, stdev=23457.55 00:24:40.160 clat (msec): min=4, max=740, avg=402.30, stdev=168.25 00:24:40.160 lat (msec): min=4, max=807, avg=408.01, stdev=171.54 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 10], 5.00th=[ 30], 10.00th=[ 114], 20.00th=[ 228], 00:24:40.160 | 30.00th=[ 422], 40.00th=[ 451], 50.00th=[ 460], 60.00th=[ 472], 00:24:40.160 | 70.00th=[ 485], 80.00th=[ 506], 90.00th=[ 542], 95.00th=[ 625], 00:24:40.160 | 99.00th=[ 701], 99.50th=[ 743], 99.90th=[ 743], 99.95th=[ 743], 00:24:40.160 | 99.99th=[ 743] 00:24:40.160 bw ( KiB/s): min=14336, max=129277, per=7.13%, avg=39244.15, stdev=22980.78, samples=20 00:24:40.160 iops : min= 56, max= 504, avg=153.10, stdev=89.59, samples=20 00:24:40.160 lat (msec) : 10=1.13%, 20=1.44%, 50=3.94%, 100=2.32%, 250=14.21% 00:24:40.160 lat (msec) : 500=55.23%, 750=21.73% 00:24:40.160 cpu : usr=0.06%, sys=0.88%, ctx=345, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=1597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 job10: (groupid=0, jobs=1): err= 0: pid=104516: Wed Nov 20 
17:45:19 2024 00:24:40.160 read: IOPS=125, BW=31.3MiB/s (32.8MB/s)(320MiB/10206msec) 00:24:40.160 slat (usec): min=20, max=368456, avg=7908.86, stdev=30483.05 00:24:40.160 clat (msec): min=50, max=1014, avg=501.60, stdev=111.37 00:24:40.160 lat (msec): min=50, max=1014, avg=509.51, stdev=114.89 00:24:40.160 clat percentiles (msec): 00:24:40.160 | 1.00th=[ 211], 5.00th=[ 338], 10.00th=[ 405], 20.00th=[ 439], 00:24:40.160 | 30.00th=[ 451], 40.00th=[ 468], 50.00th=[ 481], 60.00th=[ 502], 00:24:40.160 | 70.00th=[ 523], 80.00th=[ 575], 90.00th=[ 667], 95.00th=[ 718], 00:24:40.160 | 99.00th=[ 802], 99.50th=[ 802], 99.90th=[ 1011], 99.95th=[ 1011], 00:24:40.160 | 99.99th=[ 1011] 00:24:40.160 bw ( KiB/s): min= 2560, max=38989, per=5.65%, avg=31088.65, stdev=7400.10, samples=20 00:24:40.160 iops : min= 10, max= 152, avg=121.25, stdev=28.87, samples=20 00:24:40.160 lat (msec) : 100=0.55%, 250=0.55%, 500=59.62%, 750=37.17%, 1000=1.96% 00:24:40.160 lat (msec) : 2000=0.16% 00:24:40.160 cpu : usr=0.06%, sys=0.67%, ctx=303, majf=0, minf=4097 00:24:40.160 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:24:40.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.160 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.160 00:24:40.160 Run status group 0 (all jobs): 00:24:40.160 READ: bw=537MiB/s (563MB/s), 18.0MiB/s-127MiB/s (18.8MB/s-133MB/s), io=5490MiB (5757MB), run=10087-10217msec 00:24:40.160 00:24:40.160 Disk stats (read/write): 00:24:40.160 nvme0n1: ios=1341/0, merge=0/0, ticks=1179445/0, in_queue=1179445, util=97.52% 00:24:40.160 nvme10n1: ios=2693/0, merge=0/0, ticks=1219335/0, in_queue=1219335, util=97.75% 00:24:40.160 nvme1n1: ios=2728/0, merge=0/0, ticks=1221665/0, in_queue=1221665, util=97.83% 00:24:40.160 nvme2n1: ios=3034/0, merge=0/0, ticks=1221285/0, in_queue=1221285, util=97.26% 00:24:40.160 nvme3n1: ios=10122/0, merge=0/0, ticks=1235870/0, in_queue=1235870, util=97.14% 00:24:40.160 nvme4n1: ios=1438/0, merge=0/0, ticks=1194227/0, in_queue=1194227, util=98.34% 00:24:40.160 nvme5n1: ios=3117/0, merge=0/0, ticks=1213769/0, in_queue=1213769, util=98.38% 00:24:40.161 nvme6n1: ios=2604/0, merge=0/0, ticks=1221198/0, in_queue=1221198, util=98.47% 00:24:40.161 nvme7n1: ios=9950/0, merge=0/0, ticks=1230018/0, in_queue=1230018, util=97.91% 00:24:40.161 nvme8n1: ios=3067/0, merge=0/0, ticks=1222880/0, in_queue=1222880, util=99.05% 00:24:40.161 nvme9n1: ios=2429/0, merge=0/0, ticks=1227249/0, in_queue=1227249, util=98.77% 00:24:40.161 17:45:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:40.161 [global] 00:24:40.161 thread=1 00:24:40.161 invalidate=1 00:24:40.161 rw=randwrite 00:24:40.161 time_based=1 00:24:40.161 runtime=10 00:24:40.161 ioengine=libaio 00:24:40.161 direct=1 00:24:40.161 bs=262144 00:24:40.161 iodepth=64 00:24:40.161 norandommap=1 00:24:40.161 numjobs=1 00:24:40.161 00:24:40.161 [job0] 00:24:40.161 filename=/dev/nvme0n1 00:24:40.161 [job1] 00:24:40.161 filename=/dev/nvme10n1 00:24:40.161 [job2] 00:24:40.161 filename=/dev/nvme1n1 00:24:40.161 [job3] 00:24:40.161 filename=/dev/nvme2n1 00:24:40.161 [job4] 00:24:40.161 filename=/dev/nvme3n1 00:24:40.161 [job5] 00:24:40.161 filename=/dev/nvme4n1 00:24:40.161 [job6] 
00:24:40.161 filename=/dev/nvme5n1 00:24:40.161 [job7] 00:24:40.161 filename=/dev/nvme6n1 00:24:40.161 [job8] 00:24:40.161 filename=/dev/nvme7n1 00:24:40.161 [job9] 00:24:40.161 filename=/dev/nvme8n1 00:24:40.161 [job10] 00:24:40.161 filename=/dev/nvme9n1 00:24:40.161 Could not set queue depth (nvme0n1) 00:24:40.161 Could not set queue depth (nvme10n1) 00:24:40.161 Could not set queue depth (nvme1n1) 00:24:40.161 Could not set queue depth (nvme2n1) 00:24:40.161 Could not set queue depth (nvme3n1) 00:24:40.161 Could not set queue depth (nvme4n1) 00:24:40.161 Could not set queue depth (nvme5n1) 00:24:40.161 Could not set queue depth (nvme6n1) 00:24:40.161 Could not set queue depth (nvme7n1) 00:24:40.161 Could not set queue depth (nvme8n1) 00:24:40.161 Could not set queue depth (nvme9n1) 00:24:40.161 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.161 fio-3.35 00:24:40.161 Starting 11 threads 00:24:50.132 00:24:50.133 job0: (groupid=0, jobs=1): err= 0: pid=104715: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=366, BW=91.6MiB/s (96.1MB/s)(929MiB/10134msec); 0 zone resets 00:24:50.133 slat (usec): min=22, max=28099, avg=2686.38, stdev=4785.21 00:24:50.133 clat (msec): min=3, max=348, avg=171.81, stdev=40.55 00:24:50.133 lat (msec): min=3, max=348, avg=174.50, stdev=40.86 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 91], 5.00th=[ 101], 10.00th=[ 107], 20.00th=[ 165], 00:24:50.133 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:24:50.133 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 209], 95.00th=[ 236], 00:24:50.133 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 351], 00:24:50.133 | 99.99th=[ 351] 00:24:50.133 bw ( KiB/s): min=58368, max=157696, per=10.93%, avg=93491.20, stdev=21701.60, samples=20 00:24:50.133 iops : min= 228, max= 616, avg=365.20, stdev=84.77, samples=20 00:24:50.133 lat (msec) : 4=0.08%, 20=0.08%, 50=0.32%, 100=4.39%, 250=92.65% 00:24:50.133 lat (msec) : 500=2.48% 00:24:50.133 cpu : usr=1.07%, sys=1.06%, ctx=4431, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,3715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job1: (groupid=0, jobs=1): err= 0: pid=104716: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=124, BW=31.0MiB/s (32.5MB/s)(320MiB/10307msec); 0 zone resets 00:24:50.133 slat (usec): min=20, max=107102, avg=7130.22, stdev=15161.14 00:24:50.133 clat (msec): min=17, max=859, avg=508.26, stdev=151.55 00:24:50.133 lat (msec): min=17, max=859, avg=515.39, stdev=153.23 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 34], 5.00th=[ 103], 10.00th=[ 203], 20.00th=[ 498], 00:24:50.133 | 30.00th=[ 527], 40.00th=[ 550], 50.00th=[ 558], 60.00th=[ 567], 00:24:50.133 | 70.00th=[ 575], 80.00th=[ 584], 90.00th=[ 609], 95.00th=[ 642], 00:24:50.133 | 99.00th=[ 709], 99.50th=[ 743], 99.90th=[ 860], 99.95th=[ 860], 00:24:50.133 | 99.99th=[ 860] 00:24:50.133 bw ( KiB/s): min=18906, max=80896, per=3.64%, avg=31099.40, stdev=12106.53, samples=20 00:24:50.133 iops : min= 73, max= 316, avg=121.40, stdev=47.35, samples=20 00:24:50.133 lat (msec) : 20=0.16%, 50=1.64%, 100=3.05%, 250=6.18%, 500=9.85% 00:24:50.133 lat (msec) : 750=78.66%, 1000=0.47% 00:24:50.133 cpu : usr=0.33%, sys=0.52%, ctx=1454, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,1279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job2: (groupid=0, jobs=1): err= 0: pid=104728: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=444, BW=111MiB/s (117MB/s)(1128MiB/10143msec); 0 zone resets 00:24:50.133 slat (usec): min=21, max=18816, avg=2116.01, stdev=4215.78 00:24:50.133 clat (msec): min=3, max=322, avg=141.72, stdev=63.70 00:24:50.133 lat (msec): min=3, max=322, avg=143.84, stdev=64.61 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 53], 00:24:50.133 | 30.00th=[ 113], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 178], 00:24:50.133 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 215], 00:24:50.133 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 313], 99.95th=[ 313], 00:24:50.133 | 99.99th=[ 321] 00:24:50.133 bw ( KiB/s): min=67584, max=329728, per=13.31%, avg=113868.80, stdev=64592.45, samples=20 00:24:50.133 iops : min= 264, max= 1288, avg=444.80, stdev=252.31, samples=20 00:24:50.133 lat (msec) : 4=0.07%, 10=0.22%, 20=2.77%, 50=8.29%, 100=16.78% 00:24:50.133 lat (msec) : 250=71.29%, 500=0.58% 00:24:50.133 cpu : usr=1.06%, sys=1.53%, ctx=5709, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,4511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job3: (groupid=0, jobs=1): err= 0: pid=104730: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=373, BW=93.4MiB/s (97.9MB/s)(944MiB/10111msec); 0 zone resets 00:24:50.133 slat (usec): min=21, max=40158, avg=2531.64, stdev=4621.24 
00:24:50.133 clat (msec): min=45, max=386, avg=168.71, stdev=36.42 00:24:50.133 lat (msec): min=45, max=386, avg=171.25, stdev=36.74 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 86], 5.00th=[ 100], 10.00th=[ 106], 20.00th=[ 163], 00:24:50.133 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:24:50.133 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 201], 95.00th=[ 220], 00:24:50.133 | 99.00th=[ 241], 99.50th=[ 284], 99.90th=[ 372], 99.95th=[ 380], 00:24:50.133 | 99.99th=[ 388] 00:24:50.133 bw ( KiB/s): min=69632, max=159744, per=11.11%, avg=95071.40, stdev=21805.91, samples=20 00:24:50.133 iops : min= 272, max= 624, avg=371.30, stdev=85.08, samples=20 00:24:50.133 lat (msec) : 50=0.13%, 100=5.53%, 250=93.62%, 500=0.71% 00:24:50.133 cpu : usr=0.99%, sys=1.24%, ctx=2778, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,3777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job4: (groupid=0, jobs=1): err= 0: pid=104731: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=129, BW=32.4MiB/s (34.0MB/s)(334MiB/10308msec); 0 zone resets 00:24:50.133 slat (usec): min=31, max=78241, avg=6998.72, stdev=13183.20 00:24:50.133 clat (msec): min=5, max=819, avg=485.60, stdev=93.09 00:24:50.133 lat (msec): min=5, max=819, avg=492.59, stdev=93.53 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 37], 5.00th=[ 359], 10.00th=[ 451], 20.00th=[ 472], 00:24:50.133 | 30.00th=[ 481], 40.00th=[ 489], 50.00th=[ 498], 60.00th=[ 502], 00:24:50.133 | 70.00th=[ 510], 80.00th=[ 514], 90.00th=[ 558], 95.00th=[ 592], 00:24:50.133 | 99.00th=[ 726], 99.50th=[ 760], 99.90th=[ 818], 99.95th=[ 818], 00:24:50.133 | 99.99th=[ 818] 00:24:50.133 bw ( KiB/s): min=26624, max=40448, per=3.81%, avg=32614.40, stdev=2673.88, samples=20 00:24:50.133 iops : min= 104, max= 158, avg=127.40, stdev=10.44, samples=20 00:24:50.133 lat (msec) : 10=0.07%, 20=0.37%, 50=1.12%, 100=0.97%, 250=0.45% 00:24:50.133 lat (msec) : 500=54.23%, 750=42.03%, 1000=0.75% 00:24:50.133 cpu : usr=0.39%, sys=0.54%, ctx=1735, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,1337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job5: (groupid=0, jobs=1): err= 0: pid=104732: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=1326, BW=332MiB/s (348MB/s)(3332MiB/10045msec); 0 zone resets 00:24:50.133 slat (usec): min=18, max=12563, avg=683.67, stdev=1198.81 00:24:50.133 clat (usec): min=990, max=559235, avg=47524.24, stdev=26669.19 00:24:50.133 lat (usec): min=1082, max=559299, avg=48207.91, stdev=26674.18 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:24:50.133 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:24:50.133 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 50], 95.00th=[ 58], 00:24:50.133 | 99.00th=[ 118], 99.50th=[ 218], 99.90th=[ 493], 99.95th=[ 527], 00:24:50.133 | 99.99th=[ 550] 00:24:50.133 bw ( 
KiB/s): min=188416, max=370176, per=39.69%, avg=339558.95, stdev=44580.95, samples=20 00:24:50.133 iops : min= 736, max= 1446, avg=1326.40, stdev=174.15, samples=20 00:24:50.133 lat (usec) : 1000=0.01% 00:24:50.133 lat (msec) : 2=0.07%, 4=0.05%, 10=0.43%, 20=1.55%, 50=89.02% 00:24:50.133 lat (msec) : 100=7.14%, 250=1.33%, 500=0.32%, 750=0.09% 00:24:50.133 cpu : usr=2.70%, sys=3.54%, ctx=17169, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,13328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.133 job6: (groupid=0, jobs=1): err= 0: pid=104733: Wed Nov 20 17:45:31 2024 00:24:50.133 write: IOPS=113, BW=28.5MiB/s (29.9MB/s)(293MiB/10291msec); 0 zone resets 00:24:50.133 slat (usec): min=29, max=229726, avg=8251.90, stdev=16912.87 00:24:50.133 clat (msec): min=120, max=852, avg=553.33, stdev=108.67 00:24:50.133 lat (msec): min=120, max=852, avg=561.58, stdev=108.79 00:24:50.133 clat percentiles (msec): 00:24:50.133 | 1.00th=[ 163], 5.00th=[ 409], 10.00th=[ 468], 20.00th=[ 493], 00:24:50.133 | 30.00th=[ 510], 40.00th=[ 518], 50.00th=[ 542], 60.00th=[ 558], 00:24:50.133 | 70.00th=[ 592], 80.00th=[ 625], 90.00th=[ 701], 95.00th=[ 760], 00:24:50.133 | 99.00th=[ 810], 99.50th=[ 818], 99.90th=[ 852], 99.95th=[ 852], 00:24:50.133 | 99.99th=[ 852] 00:24:50.133 bw ( KiB/s): min=15329, max=34816, per=3.32%, avg=28363.25, stdev=4977.73, samples=20 00:24:50.133 iops : min= 59, max= 136, avg=110.75, stdev=19.57, samples=20 00:24:50.133 lat (msec) : 250=2.05%, 500=23.72%, 750=68.60%, 1000=5.63% 00:24:50.133 cpu : usr=0.43%, sys=0.28%, ctx=1552, majf=0, minf=1 00:24:50.133 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:24:50.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.133 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.133 issued rwts: total=0,1172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.134 job7: (groupid=0, jobs=1): err= 0: pid=104734: Wed Nov 20 17:45:31 2024 00:24:50.134 write: IOPS=132, BW=33.1MiB/s (34.7MB/s)(341MiB/10303msec); 0 zone resets 00:24:50.134 slat (usec): min=27, max=49111, avg=6901.12, stdev=13055.89 00:24:50.134 clat (usec): min=1629, max=811019, avg=476101.30, stdev=105142.30 00:24:50.134 lat (usec): min=1704, max=811078, avg=483002.41, stdev=106132.05 00:24:50.134 clat percentiles (msec): 00:24:50.134 | 1.00th=[ 80], 5.00th=[ 161], 10.00th=[ 443], 20.00th=[ 468], 00:24:50.134 | 30.00th=[ 481], 40.00th=[ 489], 50.00th=[ 498], 60.00th=[ 502], 00:24:50.134 | 70.00th=[ 510], 80.00th=[ 514], 90.00th=[ 542], 95.00th=[ 575], 00:24:50.134 | 99.00th=[ 718], 99.50th=[ 751], 99.90th=[ 810], 99.95th=[ 810], 00:24:50.134 | 99.99th=[ 810] 00:24:50.134 bw ( KiB/s): min=26624, max=55296, per=3.89%, avg=33280.35, stdev=5635.45, samples=20 00:24:50.134 iops : min= 104, max= 216, avg=129.95, stdev=22.05, samples=20 00:24:50.134 lat (msec) : 2=0.07%, 10=0.07%, 20=0.15%, 50=0.44%, 100=1.76% 00:24:50.134 lat (msec) : 250=3.37%, 500=51.61%, 750=42.08%, 1000=0.44% 00:24:50.134 cpu : usr=0.40%, sys=0.52%, ctx=1742, majf=0, minf=1 00:24:50.134 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 
00:24:50.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.134 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.134 issued rwts: total=0,1364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.134 job8: (groupid=0, jobs=1): err= 0: pid=104735: Wed Nov 20 17:45:31 2024 00:24:50.134 write: IOPS=115, BW=29.0MiB/s (30.4MB/s)(299MiB/10311msec); 0 zone resets 00:24:50.134 slat (usec): min=31, max=239747, avg=8370.97, stdev=17261.94 00:24:50.134 clat (msec): min=5, max=887, avg=542.95, stdev=105.01 00:24:50.134 lat (msec): min=5, max=887, avg=551.32, stdev=105.40 00:24:50.134 clat percentiles (msec): 00:24:50.134 | 1.00th=[ 69], 5.00th=[ 460], 10.00th=[ 481], 20.00th=[ 502], 00:24:50.134 | 30.00th=[ 518], 40.00th=[ 531], 50.00th=[ 542], 60.00th=[ 558], 00:24:50.134 | 70.00th=[ 567], 80.00th=[ 584], 90.00th=[ 659], 95.00th=[ 693], 00:24:50.134 | 99.00th=[ 802], 99.50th=[ 835], 99.90th=[ 885], 99.95th=[ 885], 00:24:50.134 | 99.99th=[ 885] 00:24:50.134 bw ( KiB/s): min=24064, max=34816, per=3.39%, avg=28976.50, stdev=2961.74, samples=20 00:24:50.134 iops : min= 94, max= 136, avg=113.15, stdev=11.60, samples=20 00:24:50.134 lat (msec) : 10=0.17%, 20=0.08%, 50=0.33%, 100=1.00%, 250=1.34% 00:24:50.134 lat (msec) : 500=15.47%, 750=79.35%, 1000=2.26% 00:24:50.134 cpu : usr=0.41%, sys=0.41%, ctx=1595, majf=0, minf=1 00:24:50.134 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:24:50.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.134 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.134 issued rwts: total=0,1196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.134 job9: (groupid=0, jobs=1): err= 0: pid=104736: Wed Nov 20 17:45:31 2024 00:24:50.134 write: IOPS=139, BW=34.8MiB/s (36.4MB/s)(358MiB/10300msec); 0 zone resets 00:24:50.134 slat (usec): min=24, max=111138, avg=6617.08, stdev=13198.62 00:24:50.134 clat (msec): min=7, max=828, avg=453.31, stdev=139.11 00:24:50.134 lat (msec): min=7, max=828, avg=459.93, stdev=141.09 00:24:50.134 clat percentiles (msec): 00:24:50.134 | 1.00th=[ 23], 5.00th=[ 74], 10.00th=[ 215], 20.00th=[ 456], 00:24:50.134 | 30.00th=[ 477], 40.00th=[ 489], 50.00th=[ 498], 60.00th=[ 502], 00:24:50.134 | 70.00th=[ 510], 80.00th=[ 518], 90.00th=[ 542], 95.00th=[ 575], 00:24:50.134 | 99.00th=[ 701], 99.50th=[ 768], 99.90th=[ 827], 99.95th=[ 827], 00:24:50.134 | 99.99th=[ 827] 00:24:50.134 bw ( KiB/s): min=28672, max=91648, per=4.09%, avg=35014.35, stdev=13500.21, samples=20 00:24:50.134 iops : min= 112, max= 358, avg=136.70, stdev=52.75, samples=20 00:24:50.134 lat (msec) : 10=0.28%, 20=0.63%, 50=2.30%, 100=2.86%, 250=4.96% 00:24:50.134 lat (msec) : 500=45.74%, 750=42.53%, 1000=0.70% 00:24:50.134 cpu : usr=0.42%, sys=0.54%, ctx=1506, majf=0, minf=1 00:24:50.134 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:24:50.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.134 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.134 issued rwts: total=0,1432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.134 job10: (groupid=0, jobs=1): err= 0: pid=104737: Wed Nov 20 17:45:31 2024 00:24:50.134 write: IOPS=130, BW=32.7MiB/s 
(34.3MB/s)(337MiB/10308msec); 0 zone resets 00:24:50.134 slat (usec): min=29, max=231903, avg=6907.45, stdev=17999.54 00:24:50.134 clat (msec): min=8, max=869, avg=482.03, stdev=235.69 00:24:50.134 lat (msec): min=8, max=869, avg=488.94, stdev=238.62 00:24:50.134 clat percentiles (msec): 00:24:50.134 | 1.00th=[ 22], 5.00th=[ 45], 10.00th=[ 63], 20.00th=[ 70], 00:24:50.134 | 30.00th=[ 510], 40.00th=[ 558], 50.00th=[ 567], 60.00th=[ 600], 00:24:50.134 | 70.00th=[ 617], 80.00th=[ 651], 90.00th=[ 701], 95.00th=[ 726], 00:24:50.134 | 99.00th=[ 776], 99.50th=[ 810], 99.90th=[ 869], 99.95th=[ 869], 00:24:50.134 | 99.99th=[ 869] 00:24:50.134 bw ( KiB/s): min=18432, max=151040, per=3.84%, avg=32890.75, stdev=28041.27, samples=20 00:24:50.134 iops : min= 72, max= 590, avg=128.40, stdev=109.56, samples=20 00:24:50.134 lat (msec) : 10=0.37%, 20=0.59%, 50=4.90%, 100=15.21%, 250=0.82% 00:24:50.134 lat (msec) : 500=6.75%, 750=70.18%, 1000=1.19% 00:24:50.134 cpu : usr=0.35%, sys=0.53%, ctx=1804, majf=0, minf=1 00:24:50.134 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:24:50.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.134 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.134 issued rwts: total=0,1348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.134 00:24:50.134 Run status group 0 (all jobs): 00:24:50.134 WRITE: bw=835MiB/s (876MB/s), 28.5MiB/s-332MiB/s (29.9MB/s-348MB/s), io=8615MiB (9033MB), run=10045-10311msec 00:24:50.134 00:24:50.134 Disk stats (read/write): 00:24:50.134 nvme0n1: ios=50/7251, merge=0/0, ticks=56/1202800, in_queue=1202856, util=97.28% 00:24:50.134 nvme10n1: ios=49/2481, merge=0/0, ticks=61/1218678, in_queue=1218739, util=97.42% 00:24:50.134 nvme1n1: ios=32/8855, merge=0/0, ticks=29/1206687, in_queue=1206716, util=97.52% 00:24:50.134 nvme2n1: ios=22/7358, merge=0/0, ticks=50/1201119, in_queue=1201169, util=97.26% 00:24:50.134 nvme3n1: ios=0/2602, merge=0/0, ticks=0/1219400, in_queue=1219400, util=97.43% 00:24:50.134 nvme4n1: ios=0/26313, merge=0/0, ticks=0/1212076, in_queue=1212076, util=97.69% 00:24:50.134 nvme5n1: ios=0/2265, merge=0/0, ticks=0/1212926, in_queue=1212926, util=97.76% 00:24:50.134 nvme6n1: ios=0/2653, merge=0/0, ticks=0/1219395, in_queue=1219395, util=98.02% 00:24:50.134 nvme7n1: ios=0/2315, merge=0/0, ticks=0/1211805, in_queue=1211805, util=98.55% 00:24:50.134 nvme8n1: ios=0/2786, merge=0/0, ticks=0/1218561, in_queue=1218561, util=98.57% 00:24:50.134 nvme9n1: ios=0/2617, merge=0/0, ticks=0/1212155, in_queue=1212155, util=98.74% 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:50.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.134 17:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:50.134 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:50.134 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:50.135 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.135 17:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:50.135 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:50.135 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.135 17:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:50.135 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:50.135 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.135 17:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.135 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:50.396 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.396 17:45:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:50.396 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.396 17:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:50.396 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.396 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:50.656 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.656 
17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.656 rmmod nvme_tcp 00:24:50.656 rmmod nvme_fabrics 00:24:50.656 rmmod nvme_keyring 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 104037 ']' 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 104037 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 104037 ']' 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 104037 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104037 00:24:50.656 17:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.656 killing process with pid 104037 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104037' 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 104037 00:24:50.656 17:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 104037 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # 
remove_spdk_ns 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:24:51.592 00:24:51.592 real 0m50.826s 00:24:51.592 user 3m3.227s 00:24:51.592 sys 0m15.684s 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.592 ************************************ 00:24:51.592 END TEST nvmf_multiconnection 00:24:51.592 ************************************ 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:51.592 17:45:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.852 ************************************ 00:24:51.852 START TEST nvmf_initiator_timeout 00:24:51.852 ************************************ 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:51.852 * Looking for test storage... 
00:24:51.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:51.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.852 --rc genhtml_branch_coverage=1 00:24:51.852 --rc genhtml_function_coverage=1 00:24:51.852 --rc genhtml_legend=1 00:24:51.852 --rc geninfo_all_blocks=1 00:24:51.852 --rc geninfo_unexecuted_blocks=1 00:24:51.852 00:24:51.852 ' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:51.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.852 --rc genhtml_branch_coverage=1 00:24:51.852 --rc genhtml_function_coverage=1 00:24:51.852 --rc genhtml_legend=1 00:24:51.852 --rc geninfo_all_blocks=1 00:24:51.852 --rc geninfo_unexecuted_blocks=1 00:24:51.852 00:24:51.852 ' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:51.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.852 --rc genhtml_branch_coverage=1 00:24:51.852 --rc genhtml_function_coverage=1 00:24:51.852 --rc genhtml_legend=1 00:24:51.852 --rc geninfo_all_blocks=1 00:24:51.852 --rc geninfo_unexecuted_blocks=1 00:24:51.852 00:24:51.852 ' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:51.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.852 --rc genhtml_branch_coverage=1 00:24:51.852 --rc genhtml_function_coverage=1 00:24:51.852 --rc genhtml_legend=1 00:24:51.852 --rc geninfo_all_blocks=1 00:24:51.852 --rc geninfo_unexecuted_blocks=1 00:24:51.852 00:24:51.852 ' 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.852 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.853 17:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.853 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
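The "[: : integer expression expected" complaint above comes from common.sh line 33: the flag it tests is unset, so the expression expands to '[' '' -eq 1 ']' and bash's test builtin rejects the empty string where -eq needs an integer operand. The condition simply evaluates false (status 2) and the run continues, but the noise is avoidable. A minimal sketch of the failing pattern next to a defensive rewrite; SPDK_TEST_FLAG here is a hypothetical stand-in for whichever variable common.sh expands at that line:

  #!/usr/bin/env bash
  unset SPDK_TEST_FLAG
  # Failing pattern: expands to [ '' -eq 1 ]; test rejects '' as a non-integer,
  # prints the error on stderr, returns status 2, and the branch is skipped.
  [ "$SPDK_TEST_FLAG" -eq 1 ] && echo "flag set"
  # Defensive rewrite: default the expansion to 0 so the operand is always numeric.
  [ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo "flag set"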
00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:51.854 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:52.112 Cannot find device "nvmf_init_br" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:52.112 Cannot find device "nvmf_init_br2" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:52.112 Cannot find device "nvmf_tgt_br" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:52.112 Cannot find device "nvmf_tgt_br2" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:52.112 Cannot find device "nvmf_init_br" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:52.112 Cannot find device "nvmf_init_br2" 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:24:52.112 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:52.112 Cannot find device "nvmf_tgt_br" 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:52.113 Cannot find device "nvmf_tgt_br2" 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:24:52.113 17:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:52.113 Cannot find device "nvmf_br" 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:52.113 Cannot find device "nvmf_init_if" 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:52.113 Cannot find device "nvmf_init_if2" 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:52.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:52.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:52.113 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:52.380 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:52.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:52.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:24:52.380 00:24:52.380 --- 10.0.0.3 ping statistics --- 00:24:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.380 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:52.380 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:52.380 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
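  # Aside: the ipts wrapper seen in the three rules above appends
  # "-m comment --comment SPDK_NVMF:<rule>" to every rule it installs, so the
  # teardown never has to track rule positions; it just filters the ruleset by tag.
  # A minimal sketch of the pattern, assuming iptables-save/iptables-restore are available:
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Cleanup, as in the iptables-save | grep -v SPDK_NVMF | iptables-restore
  # sequence at the end of this run: keep every rule except the tagged ones.
  iptables-save | grep -v SPDK_NVMF | iptables-restore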
00:24:52.380 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:24:52.380 00:24:52.380 --- 10.0.0.4 ping statistics --- 00:24:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.380 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:52.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:24:52.381 00:24:52.381 --- 10.0.0.1 ping statistics --- 00:24:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.381 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:52.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:24:52.381 00:24:52.381 --- 10.0.0.2 ping statistics --- 00:24:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.381 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=105157 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 105157 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 105157 ']' 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.381 17:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.381 17:45:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:52.381 [2024-11-20 17:45:34.156913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:52.381 [2024-11-20 17:45:34.157738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.642 [2024-11-20 17:45:34.303650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.901 [2024-11-20 17:45:34.438567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.901 [2024-11-20 17:45:34.438644] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.901 [2024-11-20 17:45:34.438668] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.901 [2024-11-20 17:45:34.438686] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.901 [2024-11-20 17:45:34.438700] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
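At this point the target app is coming up inside the namespace that the preceding records wired together: nvmf_tgt_ns_spdk holds the target ends of two veth pairs, the initiator ends stay in the root namespace, and all four peer interfaces are enslaved to the nvmf_br bridge, which is why the four pings above succeed in both directions. A condensed replay of that setup, taken from the commands logged above (one pair shown; nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4 are wired identically):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # root ns -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # netns -> initiator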
00:24:52.901 [2024-11-20 17:45:34.438897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.901 [2024-11-20 17:45:34.439648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.901 [2024-11-20 17:45:34.439734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.901 [2024-11-20 17:45:34.439736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.468 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.468 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:53.468 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:53.468 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:53.468 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.726 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.726 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 Malloc0 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 Delay0 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 [2024-11-20 17:45:35.319475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:53.727 [2024-11-20 17:45:35.348095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.727 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:53.985 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:53.985 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:53.985 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.985 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:53.985 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=105240 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:55.887 17:45:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:55.887 [global] 00:24:55.887 thread=1 00:24:55.887 invalidate=1 00:24:55.887 rw=write 00:24:55.887 time_based=1 00:24:55.887 runtime=60 00:24:55.887 ioengine=libaio 00:24:55.887 direct=1 00:24:55.887 bs=4096 00:24:55.887 iodepth=1 00:24:55.887 norandommap=0 00:24:55.887 numjobs=1 00:24:55.887 00:24:55.887 verify_dump=1 00:24:55.887 verify_backlog=512 00:24:55.887 verify_state_save=0 00:24:55.887 do_verify=1 00:24:55.887 verify=crc32c-intel 00:24:55.887 [job0] 00:24:55.887 filename=/dev/nvme0n1 00:24:55.887 Could not set queue depth (nvme0n1) 00:24:56.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:56.146 fio-3.35 00:24:56.146 Starting 1 thread 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 true 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 true 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 true 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 true 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.433 17:45:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:01.962 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:01.962 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.962 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.962 true 00:25:01.962 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.963 true 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.963 true 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.963 true 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:01.963 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 105240 00:25:58.223 00:25:58.223 job0: (groupid=0, jobs=1): err= 0: pid=105265: Wed Nov 20 17:46:37 2024 00:25:58.223 read: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec) 00:25:58.223 slat (usec): min=13, max=103, avg=18.02, stdev= 4.27 00:25:58.223 clat (usec): min=169, max=2250, avg=212.19, stdev=24.76 00:25:58.223 lat (usec): min=185, max=2267, avg=230.20, stdev=25.53 00:25:58.223 clat percentiles (usec): 00:25:58.223 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:25:58.223 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:25:58.223 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 245], 00:25:58.223 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 424], 00:25:58.223 | 99.99th=[ 824] 00:25:58.223 write: IOPS=768, BW=3075KiB/s (3149kB/s)(180MiB/60000msec); 0 zone resets 00:25:58.223 slat (usec): min=17, max=11331, avg=25.61, stdev=65.17 00:25:58.223 clat (usec): min=15, max=40400k, avg=1041.43, stdev=188106.53 00:25:58.223 lat (usec): min=152, max=40400k, avg=1067.04, stdev=188106.52 00:25:58.223 clat percentiles (usec): 00:25:58.223 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:25:58.223 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:25:58.223 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:25:58.223 | 99.00th=[ 217], 99.50th=[ 235], 99.90th=[ 392], 99.95th=[ 586], 00:25:58.223 | 99.99th=[ 1270] 00:25:58.224 bw ( KiB/s): min= 4640, 
max=11528, per=100.00%, avg=9244.18, stdev=1337.38, samples=39 00:25:58.224 iops : min= 1160, max= 2882, avg=2311.03, stdev=334.36, samples=39 00:25:58.224 lat (usec) : 20=0.01%, 100=0.01%, 250=98.02%, 500=1.93%, 750=0.03% 00:25:58.224 lat (usec) : 1000=0.01% 00:25:58.224 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:25:58.224 cpu : usr=0.63%, sys=2.45%, ctx=92270, majf=0, minf=5 00:25:58.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.224 issued rwts: total=46080,46126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:58.224 00:25:58.224 Run status group 0 (all jobs): 00:25:58.224 READ: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:25:58.224 WRITE: bw=3075KiB/s (3149kB/s), 3075KiB/s-3075KiB/s (3149kB/s-3149kB/s), io=180MiB (189MB), run=60000-60000msec 00:25:58.224 00:25:58.224 Disk stats (read/write): 00:25:58.224 nvme0n1: ios=45894/46080, merge=0/0, ticks=10056/8061, in_queue=18117, util=99.94% 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:58.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:58.224 nvmf hotplug test: fio successful as expected 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.224 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:58.224 rmmod nvme_tcp 00:25:58.224 rmmod nvme_fabrics 00:25:58.224 rmmod nvme_keyring 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 105157 ']' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 105157 ']' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.224 killing process with pid 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105157' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 105157 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:25:58.224 00:25:58.224 real 1m5.165s 00:25:58.224 user 4m7.115s 00:25:58.224 sys 0m8.788s 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:58.224 ************************************ 00:25:58.224 END TEST nvmf_initiator_timeout 00:25:58.224 ************************************ 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:58.224 00:25:58.224 real 14m31.760s 00:25:58.224 user 44m8.476s 
00:25:58.224 sys 2m24.354s 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.224 17:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:58.224 ************************************ 00:25:58.224 END TEST nvmf_target_extra 00:25:58.224 ************************************ 00:25:58.224 17:46:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:58.224 17:46:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:58.224 17:46:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.225 17:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.225 ************************************ 00:25:58.225 START TEST nvmf_host 00:25:58.225 ************************************ 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:58.225 * Looking for test storage... 00:25:58.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:58.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.225 --rc genhtml_branch_coverage=1 00:25:58.225 --rc genhtml_function_coverage=1 00:25:58.225 --rc genhtml_legend=1 00:25:58.225 --rc geninfo_all_blocks=1 00:25:58.225 --rc geninfo_unexecuted_blocks=1 00:25:58.225 00:25:58.225 ' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:58.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.225 --rc genhtml_branch_coverage=1 00:25:58.225 --rc genhtml_function_coverage=1 00:25:58.225 --rc genhtml_legend=1 00:25:58.225 --rc geninfo_all_blocks=1 00:25:58.225 --rc geninfo_unexecuted_blocks=1 00:25:58.225 00:25:58.225 ' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:58.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.225 --rc genhtml_branch_coverage=1 00:25:58.225 --rc genhtml_function_coverage=1 00:25:58.225 --rc genhtml_legend=1 00:25:58.225 --rc geninfo_all_blocks=1 00:25:58.225 --rc geninfo_unexecuted_blocks=1 00:25:58.225 00:25:58.225 ' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:58.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.225 --rc genhtml_branch_coverage=1 00:25:58.225 --rc genhtml_function_coverage=1 00:25:58.225 --rc genhtml_legend=1 00:25:58.225 --rc geninfo_all_blocks=1 00:25:58.225 --rc geninfo_unexecuted_blocks=1 00:25:58.225 00:25:58.225 ' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.225 17:46:38 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:58.225 17:46:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:58.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.226 ************************************ 00:25:58.226 START TEST nvmf_multicontroller 00:25:58.226 ************************************ 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:58.226 * Looking for test storage... 
00:25:58.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:25:58.226 17:46:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:58.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.226 --rc genhtml_branch_coverage=1 00:25:58.226 --rc genhtml_function_coverage=1 00:25:58.226 --rc genhtml_legend=1 00:25:58.226 --rc geninfo_all_blocks=1 00:25:58.226 --rc geninfo_unexecuted_blocks=1 00:25:58.226 00:25:58.226 ' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:58.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.226 --rc genhtml_branch_coverage=1 00:25:58.226 --rc genhtml_function_coverage=1 00:25:58.226 --rc genhtml_legend=1 00:25:58.226 --rc geninfo_all_blocks=1 00:25:58.226 --rc geninfo_unexecuted_blocks=1 00:25:58.226 00:25:58.226 ' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:58.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.226 --rc genhtml_branch_coverage=1 00:25:58.226 --rc genhtml_function_coverage=1 00:25:58.226 --rc genhtml_legend=1 00:25:58.226 --rc geninfo_all_blocks=1 00:25:58.226 --rc geninfo_unexecuted_blocks=1 00:25:58.226 00:25:58.226 ' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:58.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.226 --rc genhtml_branch_coverage=1 00:25:58.226 --rc genhtml_function_coverage=1 00:25:58.226 --rc genhtml_legend=1 00:25:58.226 --rc geninfo_all_blocks=1 00:25:58.226 --rc geninfo_unexecuted_blocks=1 00:25:58.226 00:25:58.226 ' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:58.226 17:46:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.226 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:58.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:58.227 17:46:39 
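
Two recurring oddities in the common.sh sourcing above (and in its earlier pass) deserve a note. The ballooning PATH is real: paths/export.sh prepends the go/protoc/golangci directories every time it is sourced, and the nested test scripts source it repeatedly, so each pass adds another copy of the same prefix. The "line 33: [: : integer expression expected" message is a genuine shell wart being tolerated: line 33 runs '[' '' -eq 1 ']' because the variable it tests is unset, and test(1) cannot compare an empty string numerically, so the branch is silently skipped. A defensive sketch of that pattern (SPDK_TEST_FLAG and the appended argument are illustrative names; the trace only shows the empty expansion):

    # Failing shape: with the flag unset this expands to: [ '' -eq 1 ]
    # and test(1) prints "integer expression expected" instead of deciding.
    # Robust shape: default the expansion so -eq always sees an integer.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--extra-arg)  # illustrative argument
    fi
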
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:58.227 17:46:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:58.227 Cannot find device "nvmf_init_br" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:58.227 Cannot find device "nvmf_init_br2" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:58.227 Cannot find device "nvmf_tgt_br" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:58.227 Cannot find device "nvmf_tgt_br2" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:58.227 Cannot find device "nvmf_init_br" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:58.227 Cannot find device "nvmf_init_br2" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:58.227 Cannot find device "nvmf_tgt_br" 00:25:58.227 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:58.228 Cannot find device "nvmf_tgt_br2" 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:58.228 Cannot find device "nvmf_br" 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:58.228 Cannot find device "nvmf_init_if" 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:58.228 Cannot find device "nvmf_init_if2" 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:58.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:58.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:58.228 17:46:39 
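
The block above builds the virtual network the rest of the test talks over. The "Cannot find device" and "Cannot open network namespace" lines are only the idempotent teardown of a previous run failing harmlessly; the setup that follows creates two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target pairs whose far ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with every bridge-side peer about to be enslaved to nvmf_br so host and namespace share one L2 segment. Condensed to a single pair per side, the topology is equivalent to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring every link up, and 10.0.0.1 <-> 10.0.0.3 traffic crosses nvmf_br
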
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:58.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:58.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:25:58.228 00:25:58.228 --- 10.0.0.3 ping statistics --- 00:25:58.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.228 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:58.228 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:58.228 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:25:58.228 00:25:58.228 --- 10.0.0.4 ping statistics --- 00:25:58.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.228 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:58.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:58.228 00:25:58.228 --- 10.0.0.1 ping statistics --- 00:25:58.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.228 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:58.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:58.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:58.228 00:25:58.228 --- 10.0.0.2 ping statistics --- 00:25:58.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.228 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@457 -- # return 0 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=106170 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 106170 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 106170 ']' 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.228 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.229 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.229 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.229 17:46:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.229 [2024-11-20 17:46:39.599354] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:58.229 [2024-11-20 17:46:39.599482] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.229 [2024-11-20 17:46:39.744311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:58.229 [2024-11-20 17:46:39.881131] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.229 [2024-11-20 17:46:39.881227] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.229 [2024-11-20 17:46:39.881242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.229 [2024-11-20 17:46:39.881253] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.229 [2024-11-20 17:46:39.881262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.229 [2024-11-20 17:46:39.881470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.229 [2024-11-20 17:46:39.881626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.229 [2024-11-20 17:46:39.881632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 [2024-11-20 17:46:40.696249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 Malloc0 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller 
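
The nvmf_tgt launch a few steps back decodes neatly against the reactor messages above: "-m 0xE" is core mask 0b1110, i.e. cores 1, 2 and 3 with core 0 left free, which is why EAL reports "Total cores available: 3" and one reactor starts on each of cores 1, 2 and 3; "-e 0xFFFF" enables every tracepoint group, matching the "Tracepoint Group Mask 0xFFFF" notice. A quick check of the mask arithmetic:

    printf '0x%X = %d\n' $(( 0xE )) $(( 0xE ))   # 0xE = 14 = 0b1110
    for core in 0 1 2 3; do
        (( (0xE >> core) & 1 )) && echo "core $core: reactor"
    done                                          # prints cores 1, 2 and 3
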
-- common/autotest_common.sh@10 -- # set +x 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 [2024-11-20 17:46:40.769367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 [2024-11-20 17:46:40.781250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 Malloc1 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.164 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=106228 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 106228 /var/tmp/bdevperf.sock 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 106228 ']' 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
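
At this point the target side is fully wired: one TCP transport with the "-o -u 8192" options shown above, two 64 MiB / 512-byte-block malloc bdevs, and two subsystems (cnode1 and cnode2) each exposing one namespace and listening on both 10.0.0.3:4420 and 10.0.0.3:4421, so every subsystem offers two network paths for the multipath cases below; bdevperf is then started with -z (idle until driven over RPC) on its own socket. Replayed by hand against a running nvmf_tgt, the cnode1 half of that setup is roughly the following (rpc.py is the repo's scripts/rpc.py, which rpc_cmd wraps):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # ...then the same bdev/subsystem/listener trio for Malloc1 and cnode2
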
00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.165 17:46:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.485 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.486 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:59.486 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:59.486 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.486 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 NVMe0n1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.745 1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 2024/11/20 17:46:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:59.745 request: 00:25:59.745 { 00:25:59.745 "method": "bdev_nvme_attach_controller", 00:25:59.745 "params": { 00:25:59.745 "name": "NVMe0", 00:25:59.745 "trtype": "tcp", 00:25:59.745 "traddr": "10.0.0.3", 00:25:59.745 "adrfam": "ipv4", 00:25:59.745 "trsvcid": "4420", 00:25:59.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.745 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:59.745 "hostaddr": "10.0.0.1", 00:25:59.745 "prchk_reftag": false, 00:25:59.745 "prchk_guard": false, 00:25:59.745 "hdgst": false, 00:25:59.745 "ddgst": false, 00:25:59.745 "allow_unrecognized_csi": false 00:25:59.745 } 00:25:59.745 } 00:25:59.745 Got JSON-RPC error response 00:25:59.745 GoRPCClient: error on JSON-RPC call 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 2024/11/20 17:46:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:59.745 request: 00:25:59.745 { 00:25:59.745 "method": "bdev_nvme_attach_controller", 00:25:59.745 "params": { 00:25:59.745 "name": "NVMe0", 00:25:59.745 "trtype": "tcp", 00:25:59.745 "traddr": "10.0.0.3", 00:25:59.745 "adrfam": "ipv4", 00:25:59.745 "trsvcid": "4420", 00:25:59.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.745 "hostaddr": "10.0.0.1", 00:25:59.745 "prchk_reftag": false, 00:25:59.745 "prchk_guard": false, 00:25:59.745 "hdgst": false, 00:25:59.745 "ddgst": false, 00:25:59.745 "allow_unrecognized_csi": false 00:25:59.745 } 00:25:59.745 } 00:25:59.745 Got JSON-RPC error response 00:25:59.745 GoRPCClient: error on JSON-RPC call 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.745 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 2024/11/20 17:46:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:25:59.745 request: 00:25:59.745 { 00:25:59.745 
"method": "bdev_nvme_attach_controller", 00:25:59.745 "params": { 00:25:59.746 "name": "NVMe0", 00:25:59.746 "trtype": "tcp", 00:25:59.746 "traddr": "10.0.0.3", 00:25:59.746 "adrfam": "ipv4", 00:25:59.746 "trsvcid": "4420", 00:25:59.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.746 "hostaddr": "10.0.0.1", 00:25:59.746 "prchk_reftag": false, 00:25:59.746 "prchk_guard": false, 00:25:59.746 "hdgst": false, 00:25:59.746 "ddgst": false, 00:25:59.746 "multipath": "disable", 00:25:59.746 "allow_unrecognized_csi": false 00:25:59.746 } 00:25:59.746 } 00:25:59.746 Got JSON-RPC error response 00:25:59.746 GoRPCClient: error on JSON-RPC call 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.746 2024/11/20 17:46:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:59.746 request: 00:25:59.746 { 00:25:59.746 "method": "bdev_nvme_attach_controller", 00:25:59.746 "params": { 00:25:59.746 "name": "NVMe0", 00:25:59.746 "trtype": "tcp", 00:25:59.746 "traddr": 
"10.0.0.3", 00:25:59.746 "adrfam": "ipv4", 00:25:59.746 "trsvcid": "4420", 00:25:59.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.746 "hostaddr": "10.0.0.1", 00:25:59.746 "prchk_reftag": false, 00:25:59.746 "prchk_guard": false, 00:25:59.746 "hdgst": false, 00:25:59.746 "ddgst": false, 00:25:59.746 "multipath": "failover", 00:25:59.746 "allow_unrecognized_csi": false 00:25:59.746 } 00:25:59.746 } 00:25:59.746 Got JSON-RPC error response 00:25:59.746 GoRPCClient: error on JSON-RPC call 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.746 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.746 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.005 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.005 17:46:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:00.005 17:46:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.381 { 00:26:01.381 "results": [ 00:26:01.381 { 00:26:01.381 "job": "NVMe0n1", 00:26:01.381 "core_mask": "0x1", 00:26:01.381 "workload": "write", 00:26:01.381 "status": "finished", 00:26:01.381 "queue_depth": 128, 00:26:01.381 "io_size": 4096, 00:26:01.381 "runtime": 1.008385, 00:26:01.381 "iops": 18642.681118818706, 00:26:01.381 "mibps": 72.82297312038557, 00:26:01.381 "io_failed": 0, 00:26:01.381 "io_timeout": 0, 00:26:01.381 "avg_latency_us": 6846.462940291795, 00:26:01.381 "min_latency_us": 3991.7381818181816, 00:26:01.381 "max_latency_us": 12868.887272727272 00:26:01.381 } 00:26:01.381 ], 00:26:01.381 "core_count": 1 00:26:01.381 } 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 nvme1n1 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
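The trace above is the core of the multipath check: re-attaching controller NVMe0 to the same portal (10.0.0.3:4420) fails with Code=-114 whether the request carries multipath "disable" or "failover", while an attach to a second portal (4421) under the same controller name succeeds as an alternate path. A minimal sketch of that flow against a bdevperf RPC socket, using only flags that appear in the trace (addresses and NQNs mirror the test; adjust for your setup):

    # Primary path; -i pins the host-side source address.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # A second portal on the same subsystem becomes an extra path;
    # repeating the first portal would return "already exists" (-114).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    # Both paths now sit under one controller name, which is what the
    # grep -c NVMe count check below the attach calls verifies.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe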
00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 nvme1n1 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.381 17:46:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 106228 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 106228 ']' 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 106228 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106228 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:01.381 killing process with pid 106228 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106228' 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 106228 00:26:01.381 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 106228 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # 
trap - SIGINT SIGTERM EXIT 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:26:01.641 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:01.641 [2024-11-20 17:46:40.913793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:01.641 [2024-11-20 17:46:40.913920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106228 ] 00:26:01.641 [2024-11-20 17:46:41.056574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.641 [2024-11-20 17:46:41.138285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.641 [2024-11-20 17:46:41.566272] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 3f5ec759-9277-4888-8d28-de31ab83b90b already exists 00:26:01.641 [2024-11-20 17:46:41.566349] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:3f5ec759-9277-4888-8d28-de31ab83b90b alias for bdev NVMe1n1 00:26:01.641 [2024-11-20 17:46:41.566367] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:01.641 Running I/O for 1 seconds... 
00:26:01.641 18606.00 IOPS, 72.68 MiB/s 00:26:01.641 Latency(us) 00:26:01.641 [2024-11-20T17:46:43.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.641 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:01.641 NVMe0n1 : 1.01 18642.68 72.82 0.00 0.00 6846.46 3991.74 12868.89 00:26:01.641 [2024-11-20T17:46:43.429Z] =================================================================================================================== 00:26:01.641 [2024-11-20T17:46:43.429Z] Total : 18642.68 72.82 0.00 0.00 6846.46 3991.74 12868.89 00:26:01.641 Received shutdown signal, test time was about 1.000000 seconds 00:26:01.641 00:26:01.641 Latency(us) 00:26:01.641 [2024-11-20T17:46:43.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.641 [2024-11-20T17:46:43.429Z] =================================================================================================================== 00:26:01.641 [2024-11-20T17:46:43.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.641 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.641 rmmod nvme_tcp 00:26:01.641 rmmod nvme_fabrics 00:26:01.641 rmmod nvme_keyring 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 106170 ']' 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 106170 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 106170 ']' 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 106170 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:01.641 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106170 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:01.900 killing process with pid 106170 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106170' 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 106170 00:26:01.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 106170 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:02.159 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:02.418 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:02.418 17:46:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
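The nvmf_veth_fini block above tears the virtual test network down in reverse order of its construction. Condensed to one interface of each kind (the trace walks all four), the sequence amounts to:

    # Detach ports from the bridge, then remove the bridge itself.
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    # Deleting one end of a veth pair removes its peer too, which is
    # why the *_br sides never need an explicit delete of their own.
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    # remove_spdk_ns (run xtrace-disabled above) then drops the
    # nvmf_tgt_ns_spdk namespace itself.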
00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:26:02.418 00:26:02.418 real 0m5.232s 00:26:02.418 user 0m14.695s 00:26:02.418 sys 0m1.324s 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:02.418 ************************************ 00:26:02.418 END TEST nvmf_multicontroller 00:26:02.418 ************************************ 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.418 ************************************ 00:26:02.418 START TEST nvmf_aer 00:26:02.418 ************************************ 00:26:02.418 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:02.678 * Looking for test storage... 00:26:02.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:02.678 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.679 --rc genhtml_branch_coverage=1 00:26:02.679 --rc genhtml_function_coverage=1 00:26:02.679 --rc genhtml_legend=1 00:26:02.679 --rc geninfo_all_blocks=1 00:26:02.679 --rc geninfo_unexecuted_blocks=1 00:26:02.679 00:26:02.679 ' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.679 --rc genhtml_branch_coverage=1 00:26:02.679 --rc genhtml_function_coverage=1 00:26:02.679 --rc genhtml_legend=1 00:26:02.679 --rc geninfo_all_blocks=1 00:26:02.679 --rc geninfo_unexecuted_blocks=1 00:26:02.679 00:26:02.679 ' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.679 --rc genhtml_branch_coverage=1 00:26:02.679 --rc genhtml_function_coverage=1 00:26:02.679 --rc genhtml_legend=1 00:26:02.679 --rc geninfo_all_blocks=1 00:26:02.679 --rc geninfo_unexecuted_blocks=1 00:26:02.679 00:26:02.679 ' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.679 --rc genhtml_branch_coverage=1 00:26:02.679 --rc genhtml_function_coverage=1 00:26:02.679 --rc genhtml_legend=1 00:26:02.679 --rc geninfo_all_blocks=1 00:26:02.679 --rc geninfo_unexecuted_blocks=1 00:26:02.679 00:26:02.679 ' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.679 
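The scripts/common.sh helpers traced above (lt, which expands to cmp_versions) decide whether the installed lcov predates 2.x by comparing dotted version strings field by field. A standalone sketch of the same idea; the helper name version_lt is illustrative, not the script's own:

    # Compare dotted versions numerically, field by field; missing
    # fields count as 0, so 1.15 < 2 and 1.15 < 1.15.1 both hold.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        echo "lcov < 2: keep the legacy --rc lcov_* option spelling"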
17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ no == yes ]] 
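nvmftestinit is settling on the virtual-device path here (NET_TYPE=virt, no physical NICs), so nvmf_veth_init builds the test network next: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined by the nvmf_br bridge. The "Cannot find device" failures in the trace below are just the pre-flight cleanup finding nothing to remove. Condensed to one pair per side, the setup that follows amounts to:

    ip netns add nvmf_tgt_ns_spdk
    # Each interface is one end of a veth pair; the *_br peers become
    # bridge ports.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port on the initiator side, tagged SPDK_NVMF so
    # the teardown's iptables-save | grep -v SPDK_NVMF can strip it.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'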
00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.679 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:02.680 Cannot find device "nvmf_init_br" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:02.680 Cannot find device "nvmf_init_br2" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:02.680 Cannot find device "nvmf_tgt_br" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.680 Cannot find device "nvmf_tgt_br2" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:02.680 Cannot find device "nvmf_init_br" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:02.680 Cannot find device "nvmf_init_br2" 00:26:02.680 17:46:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:02.680 Cannot find device "nvmf_tgt_br" 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:26:02.680 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:02.938 Cannot find device "nvmf_tgt_br2" 00:26:02.938 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:02.939 Cannot find device "nvmf_br" 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:02.939 Cannot find device "nvmf_init_if" 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:02.939 Cannot find device "nvmf_init_if2" 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:02.939 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:03.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:03.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:26:03.198 00:26:03.198 --- 10.0.0.3 ping statistics --- 00:26:03.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.198 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:03.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:03.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:26:03.198 00:26:03.198 --- 10.0.0.4 ping statistics --- 00:26:03.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.198 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:03.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:26:03.198 00:26:03.198 --- 10.0.0.1 ping statistics --- 00:26:03.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.198 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:03.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:26:03.198 00:26:03.198 --- 10.0.0.2 ping statistics --- 00:26:03.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.198 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # return 0 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=106519 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 106519 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 106519 ']' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:03.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:03.198 17:46:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.198 [2024-11-20 17:46:44.912166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:03.198 [2024-11-20 17:46:44.913317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.456 [2024-11-20 17:46:45.052181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.456 [2024-11-20 17:46:45.142532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.456 [2024-11-20 17:46:45.142800] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.456 [2024-11-20 17:46:45.143045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.456 [2024-11-20 17:46:45.143140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.456 [2024-11-20 17:46:45.143243] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.456 [2024-11-20 17:46:45.143376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.456 [2024-11-20 17:46:45.143536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.456 [2024-11-20 17:46:45.144233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.456 [2024-11-20 17:46:45.144247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 [2024-11-20 17:46:45.325955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 Malloc0 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 [2024-11-20 17:46:45.382548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.776 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:03.776 [ 00:26:03.776 { 00:26:03.776 "allow_any_host": true, 00:26:03.776 "hosts": [], 00:26:03.776 "listen_addresses": [], 00:26:03.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:03.776 "subtype": "Discovery" 00:26:03.776 }, 00:26:03.776 { 00:26:03.776 "allow_any_host": true, 00:26:03.776 "hosts": [], 00:26:03.776 "listen_addresses": [ 00:26:03.776 { 00:26:03.776 "adrfam": "IPv4", 00:26:03.776 "traddr": "10.0.0.3", 00:26:03.776 "trsvcid": "4420", 00:26:03.777 "trtype": "TCP" 00:26:03.777 } 00:26:03.777 ], 00:26:03.777 "max_cntlid": 65519, 00:26:03.777 "max_namespaces": 2, 00:26:03.777 "min_cntlid": 1, 00:26:03.777 "model_number": "SPDK bdev Controller", 00:26:03.777 "namespaces": [ 00:26:03.777 { 00:26:03.777 "bdev_name": "Malloc0", 00:26:03.777 "name": "Malloc0", 00:26:03.777 "nguid": "23BD02938E7948AFB2C0D46B635F1E3A", 00:26:03.777 "nsid": 1, 00:26:03.777 "uuid": "23bd0293-8e79-48af-b2c0-d46b635f1e3a" 00:26:03.777 } 00:26:03.777 ], 00:26:03.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.777 "serial_number": "SPDK00000000000001", 00:26:03.777 "subtype": "NVMe" 00:26:03.777 } 00:26:03.777 ] 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=106561 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:03.777 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 Malloc1 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 [ 00:26:04.049 { 00:26:04.049 "allow_any_host": true, 00:26:04.049 "hosts": [], 00:26:04.049 "listen_addresses": [], 00:26:04.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:04.049 "subtype": "Discovery" 00:26:04.049 }, 00:26:04.049 { 00:26:04.049 "allow_any_host": true, 00:26:04.049 "hosts": [], 00:26:04.049 "listen_addresses": [ 00:26:04.049 { 00:26:04.049 "adrfam": "IPv4", 00:26:04.049 "traddr": "10.0.0.3", 00:26:04.049 "trsvcid": "4420", 00:26:04.049 "trtype": "TCP" 00:26:04.049 } 00:26:04.049 ], 00:26:04.049 "max_cntlid": 65519, 00:26:04.049 "max_namespaces": 2, 00:26:04.049 "min_cntlid": 1, 00:26:04.049 "model_number": "SPDK bdev Controller", 00:26:04.049 "namespaces": [ 00:26:04.049 { 00:26:04.049 "bdev_name": "Malloc0", 00:26:04.049 "name": "Malloc0", 00:26:04.049 "nguid": "23BD02938E7948AFB2C0D46B635F1E3A", 00:26:04.049 "nsid": 1, 00:26:04.049 "uuid": "23bd0293-8e79-48af-b2c0-d46b635f1e3a" 00:26:04.049 }, 00:26:04.049 { 00:26:04.049 "bdev_name": "Malloc1", 00:26:04.049 "name": "Malloc1", 00:26:04.049 "nguid": "32C6C9CAD1C4469C97F9473CCA6CB7CA", 00:26:04.049 "nsid": 2, 00:26:04.049 Asynchronous Event Request test 00:26:04.049 Attaching to 10.0.0.3 00:26:04.049 Attached to 10.0.0.3 00:26:04.049 Registering asynchronous event 
callbacks... 00:26:04.049 Starting namespace attribute notice tests for all controllers... 00:26:04.049 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:04.049 aer_cb - Changed Namespace 00:26:04.049 Cleaning up... 00:26:04.049 "uuid": "32c6c9ca-d1c4-469c-97f9-473cca6cb7ca" 00:26:04.049 } 00:26:04.049 ], 00:26:04.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.049 "serial_number": "SPDK00000000000001", 00:26:04.049 "subtype": "NVMe" 00:26:04.049 } 00:26:04.049 ] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 106561 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:04.049 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:04.307 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.307 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:04.307 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.307 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.307 rmmod nvme_tcp 00:26:04.307 rmmod nvme_fabrics 00:26:04.307 rmmod nvme_keyring 00:26:04.307 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 106519 ']' 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 106519 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 106519 ']' 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 106519 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:26:04.308 17:46:45 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106519 00:26:04.308 killing process with pid 106519 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106519' 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 106519 00:26:04.308 17:46:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 106519 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:04.566 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
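
The killprocess helper traced here does more than send a signal: it first verifies the pid is alive with kill -0, then (on Linux) resolves the process name with ps so it can special-case a sudo wrapper, and only then kills and reaps. Roughly, with the sudo branch hedged since the trace only shows the comparison, not what a match would do:

    killprocess() {
        local pid=$1 process_name=unknown
        [ -n "$pid" ] || return 1
        kill -0 "$pid"    # fails fast if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the harness checks for "sudo" here; this sketch just refuses to signal it
        [ "$process_name" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
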
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:26:04.824 00:26:04.824 real 0m2.279s 00:26:04.824 user 0m4.221s 00:26:04.824 sys 0m0.825s 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.824 ************************************ 00:26:04.824 END TEST nvmf_aer 00:26:04.824 ************************************ 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.824 ************************************ 00:26:04.824 START TEST nvmf_async_init 00:26:04.824 ************************************ 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:04.824 * Looking for test storage... 00:26:04.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:26:04.824 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.084 --rc genhtml_branch_coverage=1 00:26:05.084 --rc genhtml_function_coverage=1 00:26:05.084 --rc genhtml_legend=1 00:26:05.084 --rc geninfo_all_blocks=1 00:26:05.084 --rc geninfo_unexecuted_blocks=1 00:26:05.084 00:26:05.084 ' 00:26:05.084 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:05.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.084 --rc genhtml_branch_coverage=1 00:26:05.084 --rc genhtml_function_coverage=1 00:26:05.084 --rc genhtml_legend=1 00:26:05.084 --rc geninfo_all_blocks=1 00:26:05.085 --rc geninfo_unexecuted_blocks=1 00:26:05.085 00:26:05.085 ' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:05.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.085 --rc genhtml_branch_coverage=1 00:26:05.085 --rc genhtml_function_coverage=1 00:26:05.085 --rc genhtml_legend=1 00:26:05.085 --rc geninfo_all_blocks=1 00:26:05.085 --rc geninfo_unexecuted_blocks=1 00:26:05.085 00:26:05.085 ' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:05.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.085 --rc genhtml_branch_coverage=1 00:26:05.085 --rc genhtml_function_coverage=1 00:26:05.085 --rc genhtml_legend=1 00:26:05.085 --rc geninfo_all_blocks=1 00:26:05.085 --rc geninfo_unexecuted_blocks=1 00:26:05.085 00:26:05.085 ' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.085 17:46:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:05.085 17:46:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6395001de8884e83b33fb9c57bb4dd28 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
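
The nguid prepared at the top of this test (uuidgen piped through tr -d -) is nothing more than a fresh UUID with its dashes stripped: an NVMe namespace GUID is 16 bytes, exactly the payload of an RFC 4122 UUID, so the same 32 hex digits can be handed to nvmf_subsystem_add_ns as -g and later matched against both the nguid and uuid fields that bdev_get_bdevs reports. In short:

    nguid=$(uuidgen | tr -d -)    # e.g. 6395001de8884e83b33fb9c57bb4dd28
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
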
00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:05.085 Cannot find device "nvmf_init_br" 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:05.085 Cannot find device "nvmf_init_br2" 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:05.085 Cannot find device "nvmf_tgt_br" 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.085 Cannot find device "nvmf_tgt_br2" 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:26:05.085 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:05.086 Cannot find device "nvmf_init_br" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:05.086 Cannot find device "nvmf_init_br2" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:05.086 Cannot find device "nvmf_tgt_br" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:05.086 Cannot find device "nvmf_tgt_br2" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:05.086 Cannot find device "nvmf_br" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:05.086 Cannot find device "nvmf_init_if" 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:26:05.086 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:05.345 Cannot find device "nvmf_init_if2" 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:26:05.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:05.345 17:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:05.345 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.345 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.345 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.346 17:46:47 
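
Condensed, the nvmf_veth_init sequence above builds the whole test network from scratch: a dedicated namespace for the target, two veth pairs per side, initiator addresses 10.0.0.1/.2 in the root namespace, target addresses 10.0.0.3/.4 inside nvmf_tgt_ns_spdk, and a bridge nvmf_br enslaving the peer ends so the two sides can talk. One leg of each side, using the same names and commands as the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
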
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:05.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:05.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:26:05.346 00:26:05.346 --- 10.0.0.3 ping statistics --- 00:26:05.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.346 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:05.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:05.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:26:05.346 00:26:05.346 --- 10.0.0.4 ping statistics --- 00:26:05.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.346 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:26:05.346 00:26:05.346 --- 10.0.0.1 ping statistics --- 00:26:05.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.346 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:05.346 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:05.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:05.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:26:05.604 00:26:05.604 --- 10.0.0.2 ping statistics --- 00:26:05.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.604 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # return 0 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=106787 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 106787 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 106787 ']' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.604 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.605 17:46:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 [2024-11-20 17:46:47.244125] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
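
With connectivity proven by the four pings, the target itself is started inside the namespace: the NVMF_APP command array gets the ip netns exec prefix glued on (the nvmf/common.sh@227 line above), so nvmf_tgt binds its TCP listeners to the namespaced interfaces while its RPC socket stays reachable at /var/tmp/spdk.sock. The start-up pattern, with waitforlisten standing in for the harness's poll on that socket:

    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPC connections
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    rpc_cmd nvmf_create_transport -t tcp -o
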
00:26:05.605 [2024-11-20 17:46:47.244531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.605 [2024-11-20 17:46:47.388806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.863 [2024-11-20 17:46:47.518232] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.863 [2024-11-20 17:46:47.518585] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.863 [2024-11-20 17:46:47.518806] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.863 [2024-11-20 17:46:47.518963] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.863 [2024-11-20 17:46:47.519009] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.863 [2024-11-20 17:46:47.519283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.798 [2024-11-20 17:46:48.335979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.798 null0 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.798 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6395001de8884e83b33fb9c57bb4dd28 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:06.799 [2024-11-20 17:46:48.376032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.799 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 nvme0n1 00:26:07.060 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.060 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:07.060 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 [ 00:26:07.060 { 00:26:07.060 "aliases": [ 00:26:07.060 "6395001d-e888-4e83-b33f-b9c57bb4dd28" 00:26:07.060 ], 00:26:07.060 "assigned_rate_limits": { 00:26:07.060 "r_mbytes_per_sec": 0, 00:26:07.060 "rw_ios_per_sec": 0, 00:26:07.060 "rw_mbytes_per_sec": 0, 00:26:07.060 "w_mbytes_per_sec": 0 00:26:07.060 }, 00:26:07.060 "block_size": 512, 00:26:07.060 "claimed": false, 00:26:07.060 "driver_specific": { 00:26:07.060 "mp_policy": "active_passive", 00:26:07.060 "nvme": [ 00:26:07.060 { 00:26:07.060 "ctrlr_data": { 00:26:07.060 "ana_reporting": false, 00:26:07.060 "cntlid": 1, 00:26:07.060 "firmware_revision": "24.09.1", 00:26:07.060 "model_number": "SPDK bdev Controller", 00:26:07.060 "multi_ctrlr": true, 00:26:07.060 "oacs": { 00:26:07.060 "firmware": 0, 00:26:07.060 "format": 0, 00:26:07.060 "ns_manage": 0, 00:26:07.060 "security": 0 00:26:07.060 }, 00:26:07.060 "serial_number": "00000000000000000000", 00:26:07.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.060 "vendor_id": "0x8086" 00:26:07.060 }, 00:26:07.060 "ns_data": { 00:26:07.060 "can_share": true, 00:26:07.060 "id": 1 00:26:07.060 }, 00:26:07.060 "trid": { 00:26:07.060 "adrfam": "IPv4", 00:26:07.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.060 "traddr": "10.0.0.3", 00:26:07.060 
"trsvcid": "4420", 00:26:07.060 "trtype": "TCP" 00:26:07.060 }, 00:26:07.060 "vs": { 00:26:07.060 "nvme_version": "1.3" 00:26:07.060 } 00:26:07.060 } 00:26:07.060 ] 00:26:07.060 }, 00:26:07.060 "memory_domains": [ 00:26:07.060 { 00:26:07.060 "dma_device_id": "system", 00:26:07.060 "dma_device_type": 1 00:26:07.060 } 00:26:07.060 ], 00:26:07.060 "name": "nvme0n1", 00:26:07.060 "num_blocks": 2097152, 00:26:07.060 "numa_id": -1, 00:26:07.060 "product_name": "NVMe disk", 00:26:07.060 "supported_io_types": { 00:26:07.060 "abort": true, 00:26:07.060 "compare": true, 00:26:07.060 "compare_and_write": true, 00:26:07.060 "copy": true, 00:26:07.060 "flush": true, 00:26:07.060 "get_zone_info": false, 00:26:07.060 "nvme_admin": true, 00:26:07.060 "nvme_io": true, 00:26:07.061 "nvme_io_md": false, 00:26:07.061 "nvme_iov_md": false, 00:26:07.061 "read": true, 00:26:07.061 "reset": true, 00:26:07.061 "seek_data": false, 00:26:07.061 "seek_hole": false, 00:26:07.061 "unmap": false, 00:26:07.061 "write": true, 00:26:07.061 "write_zeroes": true, 00:26:07.061 "zcopy": false, 00:26:07.061 "zone_append": false, 00:26:07.061 "zone_management": false 00:26:07.061 }, 00:26:07.061 "uuid": "6395001d-e888-4e83-b33f-b9c57bb4dd28", 00:26:07.061 "zoned": false 00:26:07.061 } 00:26:07.061 ] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 [2024-11-20 17:46:48.650437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:07.061 [2024-11-20 17:46:48.651050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caaaa0 (9): Bad file descriptor 00:26:07.061 [2024-11-20 17:46:48.783398] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 [ 00:26:07.061 { 00:26:07.061 "aliases": [ 00:26:07.061 "6395001d-e888-4e83-b33f-b9c57bb4dd28" 00:26:07.061 ], 00:26:07.061 "assigned_rate_limits": { 00:26:07.061 "r_mbytes_per_sec": 0, 00:26:07.061 "rw_ios_per_sec": 0, 00:26:07.061 "rw_mbytes_per_sec": 0, 00:26:07.061 "w_mbytes_per_sec": 0 00:26:07.061 }, 00:26:07.061 "block_size": 512, 00:26:07.061 "claimed": false, 00:26:07.061 "driver_specific": { 00:26:07.061 "mp_policy": "active_passive", 00:26:07.061 "nvme": [ 00:26:07.061 { 00:26:07.061 "ctrlr_data": { 00:26:07.061 "ana_reporting": false, 00:26:07.061 "cntlid": 2, 00:26:07.061 "firmware_revision": "24.09.1", 00:26:07.061 "model_number": "SPDK bdev Controller", 00:26:07.061 "multi_ctrlr": true, 00:26:07.061 "oacs": { 00:26:07.061 "firmware": 0, 00:26:07.061 "format": 0, 00:26:07.061 "ns_manage": 0, 00:26:07.061 "security": 0 00:26:07.061 }, 00:26:07.061 "serial_number": "00000000000000000000", 00:26:07.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.061 "vendor_id": "0x8086" 00:26:07.061 }, 00:26:07.061 "ns_data": { 00:26:07.061 "can_share": true, 00:26:07.061 "id": 1 00:26:07.061 }, 00:26:07.061 "trid": { 00:26:07.061 "adrfam": "IPv4", 00:26:07.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.061 "traddr": "10.0.0.3", 00:26:07.061 "trsvcid": "4420", 00:26:07.061 "trtype": "TCP" 00:26:07.061 }, 00:26:07.061 "vs": { 00:26:07.061 "nvme_version": "1.3" 00:26:07.061 } 00:26:07.061 } 00:26:07.061 ] 00:26:07.061 }, 00:26:07.061 "memory_domains": [ 00:26:07.061 { 00:26:07.061 "dma_device_id": "system", 00:26:07.061 "dma_device_type": 1 00:26:07.061 } 00:26:07.061 ], 00:26:07.061 "name": "nvme0n1", 00:26:07.061 "num_blocks": 2097152, 00:26:07.061 "numa_id": -1, 00:26:07.061 "product_name": "NVMe disk", 00:26:07.061 "supported_io_types": { 00:26:07.061 "abort": true, 00:26:07.061 "compare": true, 00:26:07.061 "compare_and_write": true, 00:26:07.061 "copy": true, 00:26:07.061 "flush": true, 00:26:07.061 "get_zone_info": false, 00:26:07.061 "nvme_admin": true, 00:26:07.061 "nvme_io": true, 00:26:07.061 "nvme_io_md": false, 00:26:07.061 "nvme_iov_md": false, 00:26:07.061 "read": true, 00:26:07.061 "reset": true, 00:26:07.061 "seek_data": false, 00:26:07.061 "seek_hole": false, 00:26:07.061 "unmap": false, 00:26:07.061 "write": true, 00:26:07.061 "write_zeroes": true, 00:26:07.061 "zcopy": false, 00:26:07.061 "zone_append": false, 00:26:07.061 "zone_management": false 00:26:07.061 }, 00:26:07.061 "uuid": "6395001d-e888-4e83-b33f-b9c57bb4dd28", 00:26:07.061 "zoned": false 00:26:07.061 } 00:26:07.061 ] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KS8TWBI6Vq 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:07.061 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KS8TWBI6Vq 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KS8TWBI6Vq 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 [2024-11-20 17:46:48.870600] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:07.320 [2024-11-20 17:46:48.870839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 [2024-11-20 17:46:48.886595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:07.320 nvme0n1 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
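
The TLS leg of the test reduces to one 0600-protected key file and five RPCs: register the PSK with the keyring, close the subsystem to arbitrary hosts, open a second listener on 4421 with --secure-channel, admit the host NQN with that key, then dial back in from the initiator side with the same key. Condensed from the trace (the PSK is the log's own test vector, not a secret):

    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd keyring_file_add_key key0 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
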
00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.320 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.320 [ 00:26:07.320 { 00:26:07.320 "aliases": [ 00:26:07.320 "6395001d-e888-4e83-b33f-b9c57bb4dd28" 00:26:07.320 ], 00:26:07.320 "assigned_rate_limits": { 00:26:07.320 "r_mbytes_per_sec": 0, 00:26:07.320 "rw_ios_per_sec": 0, 00:26:07.320 "rw_mbytes_per_sec": 0, 00:26:07.320 "w_mbytes_per_sec": 0 00:26:07.320 }, 00:26:07.320 "block_size": 512, 00:26:07.320 "claimed": false, 00:26:07.320 "driver_specific": { 00:26:07.320 "mp_policy": "active_passive", 00:26:07.320 "nvme": [ 00:26:07.320 { 00:26:07.320 "ctrlr_data": { 00:26:07.320 "ana_reporting": false, 00:26:07.320 "cntlid": 3, 00:26:07.320 "firmware_revision": "24.09.1", 00:26:07.320 "model_number": "SPDK bdev Controller", 00:26:07.320 "multi_ctrlr": true, 00:26:07.320 "oacs": { 00:26:07.320 "firmware": 0, 00:26:07.320 "format": 0, 00:26:07.320 "ns_manage": 0, 00:26:07.320 "security": 0 00:26:07.320 }, 00:26:07.320 "serial_number": "00000000000000000000", 00:26:07.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.320 "vendor_id": "0x8086" 00:26:07.320 }, 00:26:07.320 "ns_data": { 00:26:07.320 "can_share": true, 00:26:07.320 "id": 1 00:26:07.320 }, 00:26:07.320 "trid": { 00:26:07.320 "adrfam": "IPv4", 00:26:07.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.320 "traddr": "10.0.0.3", 00:26:07.320 "trsvcid": "4421", 00:26:07.320 "trtype": "TCP" 00:26:07.320 }, 00:26:07.320 "vs": { 00:26:07.320 "nvme_version": "1.3" 00:26:07.320 } 00:26:07.320 } 00:26:07.320 ] 00:26:07.320 }, 00:26:07.321 "memory_domains": [ 00:26:07.321 { 00:26:07.321 "dma_device_id": "system", 00:26:07.321 "dma_device_type": 1 00:26:07.321 } 00:26:07.321 ], 00:26:07.321 "name": "nvme0n1", 00:26:07.321 "num_blocks": 2097152, 00:26:07.321 "numa_id": -1, 00:26:07.321 "product_name": "NVMe disk", 00:26:07.321 "supported_io_types": { 00:26:07.321 "abort": true, 00:26:07.321 "compare": true, 00:26:07.321 "compare_and_write": true, 00:26:07.321 "copy": true, 00:26:07.321 "flush": true, 00:26:07.321 "get_zone_info": false, 00:26:07.321 "nvme_admin": true, 00:26:07.321 "nvme_io": true, 00:26:07.321 "nvme_io_md": false, 00:26:07.321 "nvme_iov_md": false, 00:26:07.321 "read": true, 00:26:07.321 "reset": true, 00:26:07.321 "seek_data": false, 00:26:07.321 "seek_hole": false, 00:26:07.321 "unmap": false, 00:26:07.321 "write": true, 00:26:07.321 "write_zeroes": true, 00:26:07.321 "zcopy": false, 00:26:07.321 "zone_append": false, 00:26:07.321 "zone_management": false 00:26:07.321 }, 00:26:07.321 "uuid": "6395001d-e888-4e83-b33f-b9c57bb4dd28", 00:26:07.321 "zoned": false 00:26:07.321 } 00:26:07.321 ] 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.321 17:46:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KS8TWBI6Vq 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM 
EXIT 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:07.321 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:07.321 rmmod nvme_tcp 00:26:07.321 rmmod nvme_fabrics 00:26:07.321 rmmod nvme_keyring 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 106787 ']' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 106787 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 106787 ']' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 106787 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106787 00:26:07.580 killing process with pid 106787 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106787' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 106787 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 106787 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:07.580 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:07.838 
17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.838 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:26:08.123 00:26:08.123 real 0m3.145s 00:26:08.123 user 0m2.675s 00:26:08.123 sys 0m0.877s 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:08.123 ************************************ 00:26:08.123 END TEST nvmf_async_init 00:26:08.123 ************************************ 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.123 ************************************ 00:26:08.123 START TEST dma 00:26:08.123 ************************************ 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:08.123 * Looking for test storage... 
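
Before the dma test proper starts, note that the nvmftestfini sequence just traced is the exact mirror of the setup: the iptr helper strips the SPDK_NVMF-tagged firewall rules by filtering the comment marker out of an iptables-save/iptables-restore round trip, bridge members are detached (nomaster) and downed before the bridge and veths are deleted, and the namespace goes last. Roughly:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # the iptr helper
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns; the trace hides it behind xtrace_disable_per_cmd
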
00:26:08.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.123 --rc genhtml_branch_coverage=1 00:26:08.123 --rc genhtml_function_coverage=1 00:26:08.123 --rc genhtml_legend=1 00:26:08.123 --rc geninfo_all_blocks=1 00:26:08.123 --rc geninfo_unexecuted_blocks=1 00:26:08.123 00:26:08.123 ' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.123 --rc genhtml_branch_coverage=1 00:26:08.123 --rc genhtml_function_coverage=1 00:26:08.123 --rc genhtml_legend=1 00:26:08.123 --rc geninfo_all_blocks=1 00:26:08.123 --rc geninfo_unexecuted_blocks=1 00:26:08.123 00:26:08.123 ' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.123 --rc genhtml_branch_coverage=1 00:26:08.123 --rc genhtml_function_coverage=1 00:26:08.123 --rc genhtml_legend=1 00:26:08.123 --rc geninfo_all_blocks=1 00:26:08.123 --rc geninfo_unexecuted_blocks=1 00:26:08.123 00:26:08.123 ' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.123 --rc genhtml_branch_coverage=1 00:26:08.123 --rc genhtml_function_coverage=1 00:26:08.123 --rc genhtml_legend=1 00:26:08.123 --rc geninfo_all_blocks=1 00:26:08.123 --rc geninfo_unexecuted_blocks=1 00:26:08.123 00:26:08.123 ' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.123 17:46:49 
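The lt 1.15 2 trace above is the harness deciding whether the installed lcov predates 2.x, which selects the branch/function coverage flags exported just after. cmp_versions splits each version on '.', '-' or ':' and compares component-wise as integers, with missing components counting as 0. A condensed sketch of the '<' case only; the real helper takes an operator argument and handles the other comparisons too:

    # does dotted version $1 sort strictly before $2?
    lt() {
        local IFS=.-:
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # true here, matching the 'return 0' in the trace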
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.123 17:46:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.124 17:46:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:08.390 00:26:08.390 real 0m0.215s 00:26:08.390 user 0m0.134s 00:26:08.390 sys 0m0.090s 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:08.390 ************************************ 00:26:08.390 END TEST dma 00:26:08.390 ************************************ 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.390 17:46:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.390 ************************************ 00:26:08.390 START TEST nvmf_identify 00:26:08.390 ************************************ 00:26:08.390 17:46:49 
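Both here and in the nvmf_identify run below, sourcing nvmf/common.sh prints "[: : integer expression expected" from line 33. The cause is visible in the trace: the test expands to '[' '' -eq 1 ']', and test(1) refuses an empty string as an operand of the numeric -eq. The condition just evaluates false and execution continues, so the message is harmless noise, but the idiomatic fix is to default the variable before the comparison. A sketch; $SPDK_TEST_FLAG is an illustrative name, not the variable common.sh actually tests:

    # noisy when the flag is unset or empty:
    [ "$SPDK_TEST_FLAG" -eq 1 ] && echo "feature on"
    # quiet, and equivalent in intent:
    [ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo "feature on"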
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:08.390 * Looking for test storage... 00:26:08.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.390 --rc genhtml_branch_coverage=1 00:26:08.390 --rc genhtml_function_coverage=1 00:26:08.390 --rc genhtml_legend=1 00:26:08.390 --rc geninfo_all_blocks=1 00:26:08.390 --rc geninfo_unexecuted_blocks=1 00:26:08.390 00:26:08.390 ' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.390 --rc genhtml_branch_coverage=1 00:26:08.390 --rc genhtml_function_coverage=1 00:26:08.390 --rc genhtml_legend=1 00:26:08.390 --rc geninfo_all_blocks=1 00:26:08.390 --rc geninfo_unexecuted_blocks=1 00:26:08.390 00:26:08.390 ' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.390 --rc genhtml_branch_coverage=1 00:26:08.390 --rc genhtml_function_coverage=1 00:26:08.390 --rc genhtml_legend=1 00:26:08.390 --rc geninfo_all_blocks=1 00:26:08.390 --rc geninfo_unexecuted_blocks=1 00:26:08.390 00:26:08.390 ' 00:26:08.390 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.390 --rc genhtml_branch_coverage=1 00:26:08.390 --rc genhtml_function_coverage=1 00:26:08.390 --rc genhtml_legend=1 00:26:08.390 --rc geninfo_all_blocks=1 00:26:08.390 --rc geninfo_unexecuted_blocks=1 00:26:08.391 00:26:08.391 ' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.391 
17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.391 17:46:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:08.391 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:08.649 Cannot find device "nvmf_init_br" 00:26:08.649 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:26:08.649 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:08.649 Cannot find device "nvmf_init_br2" 00:26:08.649 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:08.650 Cannot find device "nvmf_tgt_br" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:26:08.650 Cannot find device "nvmf_tgt_br2" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:08.650 Cannot find device "nvmf_init_br" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:08.650 Cannot find device "nvmf_init_br2" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:08.650 Cannot find device "nvmf_tgt_br" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:08.650 Cannot find device "nvmf_tgt_br2" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:08.650 Cannot find device "nvmf_br" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:08.650 Cannot find device "nvmf_init_if" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:08.650 Cannot find device "nvmf_init_if2" 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:08.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:08.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:08.650 
17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:08.650 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:08.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:08.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:26:08.910 00:26:08.910 --- 10.0.0.3 ping statistics --- 00:26:08.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.910 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:08.910 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:08.910 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:26:08.910 00:26:08.910 --- 10.0.0.4 ping statistics --- 00:26:08.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.910 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:08.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:08.910 00:26:08.910 --- 10.0.0.1 ping statistics --- 00:26:08.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.910 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:08.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:08.910 00:26:08.910 --- 10.0.0.2 ping statistics --- 00:26:08.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.910 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=107120 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 107120 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 107120 ']' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:08.910 
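The string of "Cannot find device" messages above is expected: nvmf_veth_init starts by tearing down whatever topology a previous run left behind, and on a clean host every probe fails (each is followed by true in the trace). It then builds the bridged veth network the four pings just verified: initiator interfaces nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2 on the host, target interfaces nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, every *_br peer enslaved to the nvmf_br bridge, plus SPDK_NVMF-tagged iptables ACCEPT rules for port 4420 and for bridge forwarding. Condensed to a single initiator/target leg; the second pair, the link-up steps, and the iptables rules follow the same pattern:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # joins the *_br peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # reachability check in both directions, as in the trace
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1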
17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.910 17:46:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.910 [2024-11-20 17:46:50.674309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:08.910 [2024-11-20 17:46:50.674433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.169 [2024-11-20 17:46:50.819733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:09.428 [2024-11-20 17:46:50.958722] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.428 [2024-11-20 17:46:50.958791] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.428 [2024-11-20 17:46:50.958805] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.428 [2024-11-20 17:46:50.958816] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.428 [2024-11-20 17:46:50.958825] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
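nvmf_tgt is launched inside the target namespace with every tracepoint group enabled (-e 0xFFFF), so its startup banner above advertises two ways to inspect events while the test is still running. Both commands below are taken from the notices; only the copy destination is arbitrary:

    # snapshot the live nvmf trace from shared-memory id 0
    spdk_trace -s nvmf -i 0
    # or keep the raw trace ring for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/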
00:26:09.428 [2024-11-20 17:46:50.959009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.428 [2024-11-20 17:46:50.959333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.428 [2024-11-20 17:46:50.960037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.428 [2024-11-20 17:46:50.960098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:09.996 [2024-11-20 17:46:51.761407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:09.996 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 Malloc0 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 [2024-11-20 17:46:51.868896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 [ 00:26:10.255 { 00:26:10.255 "allow_any_host": true, 00:26:10.255 "hosts": [], 00:26:10.255 "listen_addresses": [ 00:26:10.255 { 00:26:10.255 "adrfam": "IPv4", 00:26:10.255 "traddr": "10.0.0.3", 00:26:10.255 "trsvcid": "4420", 00:26:10.255 "trtype": "TCP" 00:26:10.255 } 00:26:10.255 ], 00:26:10.255 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:10.255 "subtype": "Discovery" 00:26:10.255 }, 00:26:10.255 { 00:26:10.255 "allow_any_host": true, 00:26:10.255 "hosts": [], 00:26:10.255 "listen_addresses": [ 00:26:10.255 { 00:26:10.255 "adrfam": "IPv4", 00:26:10.255 "traddr": "10.0.0.3", 00:26:10.255 "trsvcid": "4420", 00:26:10.255 "trtype": "TCP" 00:26:10.255 } 00:26:10.255 ], 00:26:10.255 "max_cntlid": 65519, 00:26:10.255 "max_namespaces": 32, 00:26:10.255 "min_cntlid": 1, 00:26:10.255 "model_number": "SPDK bdev Controller", 00:26:10.255 "namespaces": [ 00:26:10.255 { 00:26:10.255 "bdev_name": "Malloc0", 00:26:10.255 "eui64": "ABCDEF0123456789", 00:26:10.255 "name": "Malloc0", 00:26:10.255 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:10.255 "nsid": 1, 00:26:10.255 "uuid": "a796bb2d-8529-4cbb-b936-3284e3db3ada" 00:26:10.255 } 00:26:10.255 ], 00:26:10.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.255 "serial_number": "SPDK00000000000001", 00:26:10.255 "subtype": "NVMe" 00:26:10.255 } 00:26:10.255 ] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.255 17:46:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:10.255 [2024-11-20 17:46:51.924447] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
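Before the initiator side runs, identify.sh provisions the target over its RPC socket: create the TCP transport, carve a 64 MiB/512 B malloc bdev, create subsystem cnode1, attach the bdev as namespace 1 with fixed NGUID/EUI64, then listen on 10.0.0.3:4420 for both the subsystem and discovery; the nvmf_get_subsystems dump above shows the resulting state. The test drives these through its rpc_cmd wrapper; the same sequence via scripts/rpc.py would look like this (a sketch, assuming the default /var/tmp/spdk.sock RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_get_subsystems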
00:26:10.255 [2024-11-20 17:46:51.924693] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107173 ] 00:26:10.519 [2024-11-20 17:46:52.062626] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:10.519 [2024-11-20 17:46:52.062717] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:10.519 [2024-11-20 17:46:52.062724] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:10.519 [2024-11-20 17:46:52.062740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:10.519 [2024-11-20 17:46:52.062754] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:10.519 [2024-11-20 17:46:52.067197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:10.519 [2024-11-20 17:46:52.067285] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12dea80 0 00:26:10.519 [2024-11-20 17:46:52.075095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:10.519 [2024-11-20 17:46:52.075124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:10.519 [2024-11-20 17:46:52.075131] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:10.519 [2024-11-20 17:46:52.075135] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:10.519 [2024-11-20 17:46:52.075177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.075185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.075190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.075222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:10.519 [2024-11-20 17:46:52.075261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.083087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.083113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.083119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083125] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.519 [2024-11-20 17:46:52.083143] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:10.519 [2024-11-20 17:46:52.083152] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:10.519 [2024-11-20 17:46:52.083160] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:10.519 [2024-11-20 17:46:52.083183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 
[2024-11-20 17:46:52.083193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.083205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.519 [2024-11-20 17:46:52.083238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.083337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.083345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.083349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.519 [2024-11-20 17:46:52.083360] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:10.519 [2024-11-20 17:46:52.083368] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:10.519 [2024-11-20 17:46:52.083377] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.083399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.519 [2024-11-20 17:46:52.083422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.083489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.083496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.083500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.519 [2024-11-20 17:46:52.083511] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:10.519 [2024-11-20 17:46:52.083522] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:10.519 [2024-11-20 17:46:52.083531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.083547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.519 [2024-11-20 17:46:52.083567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.083642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.083649] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.083653] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083657] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.519 [2024-11-20 17:46:52.083663] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:10.519 [2024-11-20 17:46:52.083674] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083679] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083683] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.083690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.519 [2024-11-20 17:46:52.083710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.083779] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.083786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.083790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.519 [2024-11-20 17:46:52.083800] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:10.519 [2024-11-20 17:46:52.083806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:10.519 [2024-11-20 17:46:52.083814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:10.519 [2024-11-20 17:46:52.083920] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:10.519 [2024-11-20 17:46:52.083926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:10.519 [2024-11-20 17:46:52.083937] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083942] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.519 [2024-11-20 17:46:52.083946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.519 [2024-11-20 17:46:52.083953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.519 [2024-11-20 17:46:52.083974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.519 [2024-11-20 17:46:52.084040] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.519 [2024-11-20 17:46:52.084047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.519 [2024-11-20 17:46:52.084051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 
[2024-11-20 17:46:52.084056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.084061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:10.520 [2024-11-20 17:46:52.084088] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.520 [2024-11-20 17:46:52.084128] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.520 [2024-11-20 17:46:52.084205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.520 [2024-11-20 17:46:52.084212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.520 [2024-11-20 17:46:52.084215] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.084225] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:10.520 [2024-11-20 17:46:52.084231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084239] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:10.520 [2024-11-20 17:46:52.084259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084270] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.520 [2024-11-20 17:46:52.084305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.520 [2024-11-20 17:46:52.084425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.520 [2024-11-20 17:46:52.084432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.520 [2024-11-20 17:46:52.084436] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084441] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dea80): datao=0, datal=4096, cccid=0 00:26:10.520 [2024-11-20 17:46:52.084454] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1324cc0) on tqpair(0x12dea80): expected_datao=0, payload_size=4096 00:26:10.520 [2024-11-20 17:46:52.084459] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 
[2024-11-20 17:46:52.084469] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084474] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.520 [2024-11-20 17:46:52.084491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.520 [2024-11-20 17:46:52.084495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.084509] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:10.520 [2024-11-20 17:46:52.084515] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:10.520 [2024-11-20 17:46:52.084521] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:10.520 [2024-11-20 17:46:52.084527] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:10.520 [2024-11-20 17:46:52.084533] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:10.520 [2024-11-20 17:46:52.084538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:10.520 [2024-11-20 17:46:52.084597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.520 [2024-11-20 17:46:52.084668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.520 [2024-11-20 17:46:52.084675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.520 [2024-11-20 17:46:52.084680] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084684] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.084693] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.520 [2024-11-20 17:46:52.084716] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084726] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.520 [2024-11-20 17:46:52.084739] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.520 [2024-11-20 17:46:52.084761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.520 [2024-11-20 17:46:52.084780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:10.520 [2024-11-20 17:46:52.084802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.084814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.520 [2024-11-20 17:46:52.084838] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324cc0, cid 0, qid 0 00:26:10.520 [2024-11-20 17:46:52.084845] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324e40, cid 1, qid 0 00:26:10.520 [2024-11-20 17:46:52.084850] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1324fc0, cid 2, qid 0 00:26:10.520 [2024-11-20 17:46:52.084856] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.520 [2024-11-20 17:46:52.084861] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13252c0, cid 4, qid 0 00:26:10.520 [2024-11-20 17:46:52.084981] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.520 [2024-11-20 17:46:52.084988] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.520 [2024-11-20 17:46:52.084992] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.084996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13252c0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.085008] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:10.520 [2024-11-20 17:46:52.085014] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:10.520 [2024-11-20 17:46:52.085025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dea80) 00:26:10.520 [2024-11-20 17:46:52.085038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.520 [2024-11-20 17:46:52.085058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13252c0, cid 4, qid 0 00:26:10.520 [2024-11-20 17:46:52.085158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.520 [2024-11-20 17:46:52.085166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.520 [2024-11-20 17:46:52.085170] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dea80): datao=0, datal=4096, cccid=4 00:26:10.520 [2024-11-20 17:46:52.085179] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13252c0) on tqpair(0x12dea80): expected_datao=0, payload_size=4096 00:26:10.520 [2024-11-20 17:46:52.085184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085192] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085196] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.520 [2024-11-20 17:46:52.085211] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.520 [2024-11-20 17:46:52.085215] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13252c0) on tqpair=0x12dea80 00:26:10.520 [2024-11-20 17:46:52.085235] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:10.520 [2024-11-20 17:46:52.085300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.520 [2024-11-20 17:46:52.085311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dea80) 00:26:10.521 [2024-11-20 17:46:52.085320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.521 [2024-11-20 17:46:52.085329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085333] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085337] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12dea80) 00:26:10.521 [2024-11-20 17:46:52.085344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.521 [2024-11-20 17:46:52.085374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13252c0, cid 4, qid 0 00:26:10.521 [2024-11-20 17:46:52.085383] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325440, cid 5, qid 0 00:26:10.521 [2024-11-20 17:46:52.085552] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.521 [2024-11-20 17:46:52.085560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.521 [2024-11-20 17:46:52.085564] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dea80): datao=0, datal=1024, cccid=4 00:26:10.521 [2024-11-20 17:46:52.085573] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13252c0) on tqpair(0x12dea80): expected_datao=0, payload_size=1024 00:26:10.521 [2024-11-20 17:46:52.085578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085585] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085590] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.521 [2024-11-20 17:46:52.085602] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.521 [2024-11-20 17:46:52.085606] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.085610] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325440) on tqpair=0x12dea80 00:26:10.521 [2024-11-20 17:46:52.126218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.521 [2024-11-20 17:46:52.126270] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.521 [2024-11-20 17:46:52.126277] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13252c0) on tqpair=0x12dea80 00:26:10.521 [2024-11-20 17:46:52.126315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dea80) 00:26:10.521 [2024-11-20 17:46:52.126340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.521 [2024-11-20 17:46:52.126412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13252c0, cid 4, qid 0 00:26:10.521 [2024-11-20 17:46:52.126563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.521 [2024-11-20 17:46:52.126571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.521 [2024-11-20 17:46:52.126576] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126580] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dea80): datao=0, datal=3072, cccid=4 00:26:10.521 [2024-11-20 17:46:52.126587] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13252c0) on tqpair(0x12dea80): expected_datao=0, payload_size=3072 00:26:10.521 [2024-11-20 17:46:52.126592] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126602] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126607] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126617] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.521 [2024-11-20 17:46:52.126624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.521 [2024-11-20 17:46:52.126628] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13252c0) on tqpair=0x12dea80 00:26:10.521 [2024-11-20 17:46:52.126643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dea80) 00:26:10.521 [2024-11-20 17:46:52.126655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.521 [2024-11-20 17:46:52.126686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13252c0, cid 4, qid 0 00:26:10.521 [2024-11-20 17:46:52.126780] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.521 [2024-11-20 17:46:52.126787] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.521 [2024-11-20 17:46:52.126791] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126795] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dea80): datao=0, datal=8, cccid=4 00:26:10.521 [2024-11-20 17:46:52.126800] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13252c0) on tqpair(0x12dea80): expected_datao=0, payload_size=8 00:26:10.521 [2024-11-20 17:46:52.126805] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126812] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.126816] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.171123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.521 [2024-11-20 17:46:52.171169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.521 [2024-11-20 17:46:52.171174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.521 [2024-11-20 17:46:52.171182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13252c0) on tqpair=0x12dea80 00:26:10.521 ===================================================== 00:26:10.521 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:10.521 ===================================================== 00:26:10.521 Controller Capabilities/Features 00:26:10.521 ================================ 00:26:10.521 Vendor ID: 0000 00:26:10.521 Subsystem Vendor ID: 0000 00:26:10.521 Serial Number: .................... 00:26:10.521 Model Number: ........................................ 
00:26:10.521 Firmware Version: 24.09.1 00:26:10.521 Recommended Arb Burst: 0 00:26:10.521 IEEE OUI Identifier: 00 00 00 00:26:10.521 Multi-path I/O 00:26:10.521 May have multiple subsystem ports: No 00:26:10.521 May have multiple controllers: No 00:26:10.521 Associated with SR-IOV VF: No 00:26:10.521 Max Data Transfer Size: 131072 00:26:10.521 Max Number of Namespaces: 0 00:26:10.521 Max Number of I/O Queues: 1024 00:26:10.521 NVMe Specification Version (VS): 1.3 00:26:10.521 NVMe Specification Version (Identify): 1.3 00:26:10.521 Maximum Queue Entries: 128 00:26:10.521 Contiguous Queues Required: Yes 00:26:10.521 Arbitration Mechanisms Supported 00:26:10.521 Weighted Round Robin: Not Supported 00:26:10.521 Vendor Specific: Not Supported 00:26:10.521 Reset Timeout: 15000 ms 00:26:10.521 Doorbell Stride: 4 bytes 00:26:10.521 NVM Subsystem Reset: Not Supported 00:26:10.521 Command Sets Supported 00:26:10.521 NVM Command Set: Supported 00:26:10.521 Boot Partition: Not Supported 00:26:10.521 Memory Page Size Minimum: 4096 bytes 00:26:10.521 Memory Page Size Maximum: 4096 bytes 00:26:10.521 Persistent Memory Region: Not Supported 00:26:10.521 Optional Asynchronous Events Supported 00:26:10.521 Namespace Attribute Notices: Not Supported 00:26:10.521 Firmware Activation Notices: Not Supported 00:26:10.521 ANA Change Notices: Not Supported 00:26:10.521 PLE Aggregate Log Change Notices: Not Supported 00:26:10.521 LBA Status Info Alert Notices: Not Supported 00:26:10.521 EGE Aggregate Log Change Notices: Not Supported 00:26:10.521 Normal NVM Subsystem Shutdown event: Not Supported 00:26:10.521 Zone Descriptor Change Notices: Not Supported 00:26:10.521 Discovery Log Change Notices: Supported 00:26:10.521 Controller Attributes 00:26:10.521 128-bit Host Identifier: Not Supported 00:26:10.521 Non-Operational Permissive Mode: Not Supported 00:26:10.521 NVM Sets: Not Supported 00:26:10.521 Read Recovery Levels: Not Supported 00:26:10.521 Endurance Groups: Not Supported 00:26:10.521 Predictable Latency Mode: Not Supported 00:26:10.521 Traffic Based Keep ALive: Not Supported 00:26:10.521 Namespace Granularity: Not Supported 00:26:10.521 SQ Associations: Not Supported 00:26:10.521 UUID List: Not Supported 00:26:10.521 Multi-Domain Subsystem: Not Supported 00:26:10.521 Fixed Capacity Management: Not Supported 00:26:10.521 Variable Capacity Management: Not Supported 00:26:10.521 Delete Endurance Group: Not Supported 00:26:10.521 Delete NVM Set: Not Supported 00:26:10.521 Extended LBA Formats Supported: Not Supported 00:26:10.521 Flexible Data Placement Supported: Not Supported 00:26:10.521 00:26:10.521 Controller Memory Buffer Support 00:26:10.521 ================================ 00:26:10.521 Supported: No 00:26:10.521 00:26:10.521 Persistent Memory Region Support 00:26:10.521 ================================ 00:26:10.521 Supported: No 00:26:10.521 00:26:10.521 Admin Command Set Attributes 00:26:10.521 ============================ 00:26:10.521 Security Send/Receive: Not Supported 00:26:10.521 Format NVM: Not Supported 00:26:10.521 Firmware Activate/Download: Not Supported 00:26:10.521 Namespace Management: Not Supported 00:26:10.521 Device Self-Test: Not Supported 00:26:10.521 Directives: Not Supported 00:26:10.521 NVMe-MI: Not Supported 00:26:10.521 Virtualization Management: Not Supported 00:26:10.521 Doorbell Buffer Config: Not Supported 00:26:10.521 Get LBA Status Capability: Not Supported 00:26:10.521 Command & Feature Lockdown Capability: Not Supported 00:26:10.521 Abort Command Limit: 1 00:26:10.521 
Async Event Request Limit: 4 00:26:10.521 Number of Firmware Slots: N/A 00:26:10.521 Firmware Slot 1 Read-Only: N/A 00:26:10.521 Firmware Activation Without Reset: N/A 00:26:10.522 Multiple Update Detection Support: N/A 00:26:10.522 Firmware Update Granularity: No Information Provided 00:26:10.522 Per-Namespace SMART Log: No 00:26:10.522 Asymmetric Namespace Access Log Page: Not Supported 00:26:10.522 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:10.522 Command Effects Log Page: Not Supported 00:26:10.522 Get Log Page Extended Data: Supported 00:26:10.522 Telemetry Log Pages: Not Supported 00:26:10.522 Persistent Event Log Pages: Not Supported 00:26:10.522 Supported Log Pages Log Page: May Support 00:26:10.522 Commands Supported & Effects Log Page: Not Supported 00:26:10.522 Feature Identifiers & Effects Log Page:May Support 00:26:10.522 NVMe-MI Commands & Effects Log Page: May Support 00:26:10.522 Data Area 4 for Telemetry Log: Not Supported 00:26:10.522 Error Log Page Entries Supported: 128 00:26:10.522 Keep Alive: Not Supported 00:26:10.522 00:26:10.522 NVM Command Set Attributes 00:26:10.522 ========================== 00:26:10.522 Submission Queue Entry Size 00:26:10.522 Max: 1 00:26:10.522 Min: 1 00:26:10.522 Completion Queue Entry Size 00:26:10.522 Max: 1 00:26:10.522 Min: 1 00:26:10.522 Number of Namespaces: 0 00:26:10.522 Compare Command: Not Supported 00:26:10.522 Write Uncorrectable Command: Not Supported 00:26:10.522 Dataset Management Command: Not Supported 00:26:10.522 Write Zeroes Command: Not Supported 00:26:10.522 Set Features Save Field: Not Supported 00:26:10.522 Reservations: Not Supported 00:26:10.522 Timestamp: Not Supported 00:26:10.522 Copy: Not Supported 00:26:10.522 Volatile Write Cache: Not Present 00:26:10.522 Atomic Write Unit (Normal): 1 00:26:10.522 Atomic Write Unit (PFail): 1 00:26:10.522 Atomic Compare & Write Unit: 1 00:26:10.522 Fused Compare & Write: Supported 00:26:10.522 Scatter-Gather List 00:26:10.522 SGL Command Set: Supported 00:26:10.522 SGL Keyed: Supported 00:26:10.522 SGL Bit Bucket Descriptor: Not Supported 00:26:10.522 SGL Metadata Pointer: Not Supported 00:26:10.522 Oversized SGL: Not Supported 00:26:10.522 SGL Metadata Address: Not Supported 00:26:10.522 SGL Offset: Supported 00:26:10.522 Transport SGL Data Block: Not Supported 00:26:10.522 Replay Protected Memory Block: Not Supported 00:26:10.522 00:26:10.522 Firmware Slot Information 00:26:10.522 ========================= 00:26:10.522 Active slot: 0 00:26:10.522 00:26:10.522 00:26:10.522 Error Log 00:26:10.522 ========= 00:26:10.522 00:26:10.522 Active Namespaces 00:26:10.522 ================= 00:26:10.522 Discovery Log Page 00:26:10.522 ================== 00:26:10.522 Generation Counter: 2 00:26:10.522 Number of Records: 2 00:26:10.522 Record Format: 0 00:26:10.522 00:26:10.522 Discovery Log Entry 0 00:26:10.522 ---------------------- 00:26:10.522 Transport Type: 3 (TCP) 00:26:10.522 Address Family: 1 (IPv4) 00:26:10.522 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:10.522 Entry Flags: 00:26:10.522 Duplicate Returned Information: 1 00:26:10.522 Explicit Persistent Connection Support for Discovery: 1 00:26:10.522 Transport Requirements: 00:26:10.522 Secure Channel: Not Required 00:26:10.522 Port ID: 0 (0x0000) 00:26:10.522 Controller ID: 65535 (0xffff) 00:26:10.522 Admin Max SQ Size: 128 00:26:10.522 Transport Service Identifier: 4420 00:26:10.522 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:10.522 Transport Address: 10.0.0.3 00:26:10.522 
Discovery Log Entry 1 00:26:10.522 ---------------------- 00:26:10.522 Transport Type: 3 (TCP) 00:26:10.522 Address Family: 1 (IPv4) 00:26:10.522 Subsystem Type: 2 (NVM Subsystem) 00:26:10.522 Entry Flags: 00:26:10.522 Duplicate Returned Information: 0 00:26:10.522 Explicit Persistent Connection Support for Discovery: 0 00:26:10.522 Transport Requirements: 00:26:10.522 Secure Channel: Not Required 00:26:10.522 Port ID: 0 (0x0000) 00:26:10.522 Controller ID: 65535 (0xffff) 00:26:10.522 Admin Max SQ Size: 128 00:26:10.522 Transport Service Identifier: 4420 00:26:10.522 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:10.522 Transport Address: 10.0.0.3 [2024-11-20 17:46:52.171402] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:10.522 [2024-11-20 17:46:52.171423] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324cc0) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.522 [2024-11-20 17:46:52.171443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324e40) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.522 [2024-11-20 17:46:52.171455] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1324fc0) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.522 [2024-11-20 17:46:52.171466] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.522 [2024-11-20 17:46:52.171487] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.522 [2024-11-20 17:46:52.171512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.522 [2024-11-20 17:46:52.171555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.522 [2024-11-20 17:46:52.171690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.522 [2024-11-20 17:46:52.171698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.522 [2024-11-20 17:46:52.171702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.522 [2024-11-20 
17:46:52.171732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.522 [2024-11-20 17:46:52.171760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.522 [2024-11-20 17:46:52.171863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.522 [2024-11-20 17:46:52.171871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.522 [2024-11-20 17:46:52.171875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.171885] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:10.522 [2024-11-20 17:46:52.171897] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:10.522 [2024-11-20 17:46:52.171908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171914] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.171918] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.522 [2024-11-20 17:46:52.171925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.522 [2024-11-20 17:46:52.171948] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.522 [2024-11-20 17:46:52.172021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.522 [2024-11-20 17:46:52.172028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.522 [2024-11-20 17:46:52.172032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.172049] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172054] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172058] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.522 [2024-11-20 17:46:52.172082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.522 [2024-11-20 17:46:52.172106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.522 [2024-11-20 17:46:52.172171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.522 [2024-11-20 17:46:52.172178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.522 [2024-11-20 17:46:52.172182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.522 [2024-11-20 17:46:52.172197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172202] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.522 [2024-11-20 17:46:52.172206] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.522 [2024-11-20 17:46:52.172214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.522 [2024-11-20 17:46:52.172235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172297] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172313] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.172323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172332] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.172340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.172360] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172449] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172458] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.172470] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172479] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.172487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.172508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172574] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172592] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.172612] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.172629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.172650] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.172750] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.172766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.172786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.172885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.172894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.172901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.172921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.172988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.172995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.172999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.173014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173019] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173023] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.173031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.173050] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 
[2024-11-20 17:46:52.173125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.173133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.173137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.173152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173158] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173162] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.173169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.173191] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.173261] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.173268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.173272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173276] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.173287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173296] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.173304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.173324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.173407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.173414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.523 [2024-11-20 17:46:52.173418] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173422] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.173433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173442] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.173449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.173469] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.523 [2024-11-20 17:46:52.173549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.523 [2024-11-20 17:46:52.173556] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:10.523 [2024-11-20 17:46:52.173560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.523 [2024-11-20 17:46:52.173575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173580] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.523 [2024-11-20 17:46:52.173584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.523 [2024-11-20 17:46:52.173591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-11-20 17:46:52.173611] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.173680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.173692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.173697] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173701] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.173712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.173729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.173750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.173812] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.173819] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.173823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.173837] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.173854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.173874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.173945] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.173952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.173956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173960] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.173972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.173982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.173989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174092] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174129] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174254] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174274] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174398] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174403] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174407] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174526] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174531] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174535] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174652] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174796] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 
[2024-11-20 17:46:52.174804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174823] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.174890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.174908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.174912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.174928] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.174937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.174945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.174965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.175052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.175059] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.175063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.179088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.179108] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.179114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.179118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dea80) 00:26:10.524 [2024-11-20 17:46:52.179128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.524 [2024-11-20 17:46:52.179155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1325140, cid 3, qid 0 00:26:10.524 [2024-11-20 17:46:52.179236] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.524 [2024-11-20 17:46:52.179244] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.524 [2024-11-20 17:46:52.179248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.524 [2024-11-20 17:46:52.179252] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1325140) on tqpair=0x12dea80 00:26:10.524 [2024-11-20 17:46:52.179261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:10.524 00:26:10.525 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:10.525 [2024-11-20 17:46:52.220840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
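The spdk_nvme_identify invocation above drives the same controller-initialization state machine the DEBUG trace records: connect the admin queue, read VS and CAP, write CC.EN = 1, poll for CSTS.RDY = 1, then issue IDENTIFY. A minimal sketch of that connect-and-identify flow against the same target through SPDK's public C API follows; it is illustrative only (not the tool's actual source), with error handling abbreviated:

/* Illustrative sketch: connect to the NVMe-oF TCP target from the log
 * and read back identify data. The transport-ID string mirrors the -r
 * argument passed to spdk_nvme_identify above. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";  /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target the test connects to (10.0.0.3:4420, cnode1). */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Runs the init sequence logged above: connect adminq, read
	 * VS/CAP, enable CC.EN, wait for CSTS.RDY, then IDENTIFY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x\n", cdata->cntlid);
	printf("Max xfer size: %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}

With a debug SPDK build and -L all style logging enabled, a program like this would emit the same FABRIC PROPERTY GET/SET and IDENTIFY records seen in this trace.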
00:26:10.525 [2024-11-20 17:46:52.220908] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107181 ] 00:26:10.787 [2024-11-20 17:46:52.362676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:10.787 [2024-11-20 17:46:52.362776] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:10.787 [2024-11-20 17:46:52.362784] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:10.787 [2024-11-20 17:46:52.362800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:10.787 [2024-11-20 17:46:52.362813] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:10.787 [2024-11-20 17:46:52.367235] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:10.787 [2024-11-20 17:46:52.367317] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2379a80 0 00:26:10.787 [2024-11-20 17:46:52.367406] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:10.787 [2024-11-20 17:46:52.367417] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:10.787 [2024-11-20 17:46:52.367422] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:10.787 [2024-11-20 17:46:52.367426] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:10.787 [2024-11-20 17:46:52.367467] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.787 [2024-11-20 17:46:52.367474] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.787 [2024-11-20 17:46:52.367479] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.787 [2024-11-20 17:46:52.367497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:10.787 [2024-11-20 17:46:52.367525] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.374103] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.374118] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.374141] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:10.788 [2024-11-20 17:46:52.374151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:10.788 [2024-11-20 17:46:52.374158] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:10.788 [2024-11-20 17:46:52.374181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374192] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.374203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.374233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374318] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.374326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.374330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.374342] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:10.788 [2024-11-20 17:46:52.374350] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:10.788 [2024-11-20 17:46:52.374359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374364] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374368] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.374376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.374397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.374468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.374472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374477] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.374483] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:10.788 [2024-11-20 17:46:52.374493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.374501] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374506] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.374517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.374537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.374602] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.374606] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374610] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.374616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.374627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374636] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.374644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.374663] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374722] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.374729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.374733] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374737] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.374743] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:10.788 [2024-11-20 17:46:52.374748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.374757] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.374863] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:10.788 [2024-11-20 17:46:52.374868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.374879] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374884] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.374887] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.374905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.374927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.374994] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.375001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.375005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.375015] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:10.788 [2024-11-20 17:46:52.375026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375031] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.375081] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.375151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.375158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.375162] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.375172] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:10.788 [2024-11-20 17:46:52.375178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375186] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:10.788 [2024-11-20 17:46:52.375204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375216] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.375251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.375379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.788 [2024-11-20 17:46:52.375387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.788 [2024-11-20 17:46:52.375391] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375396] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=4096, cccid=0 00:26:10.788 [2024-11-20 17:46:52.375401] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23bfcc0) on tqpair(0x2379a80): expected_datao=0, payload_size=4096 00:26:10.788 [2024-11-20 17:46:52.375406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375416] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375421] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 
17:46:52.375430] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.375436] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.375440] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.375454] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:10.788 [2024-11-20 17:46:52.375460] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:10.788 [2024-11-20 17:46:52.375465] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:10.788 [2024-11-20 17:46:52.375470] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:10.788 [2024-11-20 17:46:52.375476] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:10.788 [2024-11-20 17:46:52.375481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375507] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:10.788 [2024-11-20 17:46:52.375535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.375608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.375615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.375619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.375632] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375637] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375641] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.788 [2024-11-20 17:46:52.375655] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2379a80) 00:26:10.788 
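The identify_done records above (transport max_xfer_size, MDTS-derived max_xfer_size 131072, CNTLID 0x0001, max_sges, fused compare-and-write) are values the driver caches from the IDENTIFY CONTROLLER payload, and application code can read them back through the public accessors. A hedged sketch, with the helper name invented for illustration:

#include <stdio.h>
#include "spdk/nvme.h"

/* Read back the fields the driver just logged; ctrlr is the handle
 * returned by spdk_nvme_connect(). */
static void print_identify_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("CNTLID 0x%04x\n", cdata->cntlid);
	printf("MDTS %u (power of two, in CAP.MPSMIN units)\n", cdata->mdts);
	printf("fused compare-and-write: %u\n", cdata->fuses.compare_and_write);
	printf("max_xfer_size %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}
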
[2024-11-20 17:46:52.375669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.788 [2024-11-20 17:46:52.375676] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375684] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.788 [2024-11-20 17:46:52.375697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.788 [2024-11-20 17:46:52.375716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375739] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375743] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.375772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfcc0, cid 0, qid 0 00:26:10.788 [2024-11-20 17:46:52.375780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bfe40, cid 1, qid 0 00:26:10.788 [2024-11-20 17:46:52.375786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23bffc0, cid 2, qid 0 00:26:10.788 [2024-11-20 17:46:52.375791] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.788 [2024-11-20 17:46:52.375797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.788 [2024-11-20 17:46:52.375904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.375911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.375915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.375926] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:10.788 [2024-11-20 17:46:52.375931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375941] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.375961] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375965] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.375969] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.375977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:10.788 [2024-11-20 17:46:52.375996] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.788 [2024-11-20 17:46:52.376082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.376091] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.376095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.376169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.376181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.376190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376194] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.376202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.376225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.788 [2024-11-20 17:46:52.376321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.788 [2024-11-20 17:46:52.376328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.788 [2024-11-20 17:46:52.376332] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376336] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=4096, cccid=4 00:26:10.788 [2024-11-20 17:46:52.376341] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c02c0) on tqpair(0x2379a80): expected_datao=0, payload_size=4096 00:26:10.788 [2024-11-20 17:46:52.376346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376354] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376358] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.788 [2024-11-20 17:46:52.376373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:10.788 [2024-11-20 17:46:52.376377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376381] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.788 [2024-11-20 17:46:52.376393] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:10.788 [2024-11-20 17:46:52.376408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.376420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:10.788 [2024-11-20 17:46:52.376428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.788 [2024-11-20 17:46:52.376440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.788 [2024-11-20 17:46:52.376461] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.788 [2024-11-20 17:46:52.376564] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.788 [2024-11-20 17:46:52.376571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.788 [2024-11-20 17:46:52.376575] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.788 [2024-11-20 17:46:52.376579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=4096, cccid=4 00:26:10.788 [2024-11-20 17:46:52.376584] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c02c0) on tqpair(0x2379a80): expected_datao=0, payload_size=4096 00:26:10.788 [2024-11-20 17:46:52.376588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376596] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376600] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.376615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.376618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.376641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.376674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.376695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.789 [2024-11-20 17:46:52.376773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.789 [2024-11-20 17:46:52.376780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.789 [2024-11-20 17:46:52.376784] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376788] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=4096, cccid=4 00:26:10.789 [2024-11-20 17:46:52.376793] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c02c0) on tqpair(0x2379a80): expected_datao=0, payload_size=4096 00:26:10.789 [2024-11-20 17:46:52.376797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376804] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376808] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.376823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.376827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.376841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376887] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:10.789 [2024-11-20 17:46:52.376892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:10.789 [2024-11-20 17:46:52.376897] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:10.789 [2024-11-20 17:46:52.376918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.376930] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.376938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376942] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.376946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.376953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:10.789 [2024-11-20 17:46:52.376975] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.789 [2024-11-20 17:46:52.376983] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0440, cid 5, qid 0 00:26:10.789 [2024-11-20 17:46:52.377085] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.377094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.377098] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377102] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.377110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.377116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.377120] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0440) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.377136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0440, cid 5, qid 0 00:26:10.789 [2024-11-20 17:46:52.377240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.377248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.377252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0440) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.377266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0440, cid 5, qid 0 00:26:10.789 [2024-11-20 17:46:52.377417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 
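In the span above the driver configured AER, set up keep alive (a KEEP ALIVE capsule every 5000000 us against the 10000 ms timeout, issued by the driver itself whenever the application polls the admin queue with spdk_nvme_ctrlr_process_admin_completions()), negotiated queue counts, and discovered that namespace 1 is active. Enumerating those active namespaces from application code looks roughly like the sketch below; the helper name is invented, the calls are the public spdk/nvme.h API:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Walk the active namespace list that "Namespace 1 was added" refers to. */
static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

		printf("ns %" PRIu32 ": %" PRIu64 " blocks of %" PRIu32 " bytes\n",
		       nsid, nsdata->nsze, spdk_nvme_ns_get_sector_size(ns));
	}
}
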
[2024-11-20 17:46:52.377426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.377431] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377435] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0440) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.377446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377451] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377481] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0440, cid 5, qid 0 00:26:10.789 [2024-11-20 17:46:52.377553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.377560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.377564] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377568] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0440) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.377592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377618] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2379a80) 00:26:10.789 [2024-11-20 17:46:52.377669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.789 [2024-11-20 17:46:52.377690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0440, cid 5, qid 0 00:26:10.789 [2024-11-20 17:46:52.377698] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c02c0, cid 4, qid 0 00:26:10.789 [2024-11-20 17:46:52.377704] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c05c0, cid 6, qid 0 00:26:10.789 [2024-11-20 17:46:52.377710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0740, cid 7, qid 0 00:26:10.789 [2024-11-20 17:46:52.377880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.789 [2024-11-20 17:46:52.377887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.789 [2024-11-20 17:46:52.377891] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377895] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=8192, cccid=5 00:26:10.789 [2024-11-20 17:46:52.377900] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c0440) on tqpair(0x2379a80): expected_datao=0, payload_size=8192 00:26:10.789 [2024-11-20 17:46:52.377905] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377923] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377928] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.789 [2024-11-20 17:46:52.377940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.789 [2024-11-20 17:46:52.377944] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=512, cccid=4 00:26:10.789 [2024-11-20 17:46:52.377952] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c02c0) on tqpair(0x2379a80): expected_datao=0, payload_size=512 00:26:10.789 [2024-11-20 17:46:52.377960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377967] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377970] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.789 [2024-11-20 17:46:52.377982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.789 [2024-11-20 17:46:52.377986] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.377990] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2379a80): datao=0, datal=512, cccid=6 00:26:10.789 [2024-11-20 17:46:52.377994] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c05c0) on tqpair(0x2379a80): expected_datao=0, payload_size=512 00:26:10.789 [2024-11-20 17:46:52.377999] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378005] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378009] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:10.789 [2024-11-20 17:46:52.378021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:10.789 [2024-11-20 17:46:52.378025] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378029] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2379a80): datao=0, datal=4096, cccid=7 00:26:10.789 [2024-11-20 17:46:52.378033] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23c0740) on tqpair(0x2379a80): expected_datao=0, payload_size=4096 00:26:10.789 [2024-11-20 17:46:52.378038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378045] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378049] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.378057] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.382074] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.382090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.382096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0440) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.382121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.382130] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.382133] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.382137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c02c0) on tqpair=0x2379a80 00:26:10.789 ===================================================== 00:26:10.789 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.789 ===================================================== 00:26:10.789 Controller Capabilities/Features 00:26:10.789 ================================ 00:26:10.789 Vendor ID: 8086 00:26:10.789 Subsystem Vendor ID: 8086 00:26:10.789 Serial Number: SPDK00000000000001 00:26:10.789 Model Number: SPDK bdev Controller 00:26:10.789 Firmware Version: 24.09.1 00:26:10.789 Recommended Arb Burst: 6 00:26:10.789 IEEE OUI Identifier: e4 d2 5c 00:26:10.789 Multi-path I/O 00:26:10.789 May have multiple subsystem ports: Yes 00:26:10.789 May have multiple controllers: Yes 00:26:10.789 Associated with SR-IOV VF: No 00:26:10.789 Max Data Transfer Size: 131072 00:26:10.789 Max Number of Namespaces: 32 00:26:10.789 Max Number of I/O Queues: 127 00:26:10.789 NVMe Specification Version (VS): 1.3 00:26:10.789 NVMe Specification Version (Identify): 1.3 00:26:10.789 Maximum Queue Entries: 128 00:26:10.789 Contiguous Queues Required: Yes 00:26:10.789 Arbitration Mechanisms Supported 00:26:10.789 Weighted Round Robin: Not Supported 00:26:10.789 Vendor Specific: Not Supported 00:26:10.789 Reset Timeout: 15000 ms 00:26:10.789 Doorbell Stride: 4 bytes 00:26:10.789 NVM Subsystem Reset: Not Supported 00:26:10.789 Command Sets Supported 00:26:10.789 NVM Command Set: Supported 00:26:10.789 Boot Partition: Not Supported 00:26:10.789 Memory Page Size Minimum: 4096 bytes 00:26:10.789 Memory Page Size Maximum: 4096 bytes 00:26:10.789 Persistent Memory Region: Not Supported 00:26:10.789 Optional Asynchronous Events Supported 00:26:10.789 Namespace Attribute Notices: Supported 00:26:10.789 Firmware Activation Notices: Not Supported 00:26:10.789 ANA Change Notices: Not Supported 00:26:10.789 PLE Aggregate Log Change Notices: Not Supported 00:26:10.789 LBA Status Info Alert Notices: Not Supported 00:26:10.789 EGE Aggregate Log Change Notices: Not Supported 00:26:10.789 Normal NVM Subsystem Shutdown event: Not Supported 00:26:10.789 Zone 
Descriptor Change Notices: Not Supported 00:26:10.789 Discovery Log Change Notices: Not Supported 00:26:10.789 Controller Attributes 00:26:10.789 128-bit Host Identifier: Supported 00:26:10.789 Non-Operational Permissive Mode: Not Supported 00:26:10.789 NVM Sets: Not Supported 00:26:10.789 Read Recovery Levels: Not Supported 00:26:10.789 Endurance Groups: Not Supported 00:26:10.789 Predictable Latency Mode: Not Supported 00:26:10.789 Traffic Based Keep ALive: Not Supported 00:26:10.789 Namespace Granularity: Not Supported 00:26:10.789 SQ Associations: Not Supported 00:26:10.789 UUID List: Not Supported 00:26:10.789 Multi-Domain Subsystem: Not Supported 00:26:10.789 Fixed Capacity Management: Not Supported 00:26:10.789 Variable Capacity Management: Not Supported 00:26:10.789 Delete Endurance Group: Not Supported 00:26:10.789 Delete NVM Set: Not Supported 00:26:10.789 Extended LBA Formats Supported: Not Supported 00:26:10.789 Flexible Data Placement Supported: Not Supported 00:26:10.789 00:26:10.789 Controller Memory Buffer Support 00:26:10.789 ================================ 00:26:10.789 Supported: No 00:26:10.789 00:26:10.789 Persistent Memory Region Support 00:26:10.789 ================================ 00:26:10.789 Supported: No 00:26:10.789 00:26:10.789 Admin Command Set Attributes 00:26:10.789 ============================ 00:26:10.789 Security Send/Receive: Not Supported 00:26:10.789 Format NVM: Not Supported 00:26:10.789 Firmware Activate/Download: Not Supported 00:26:10.789 Namespace Management: Not Supported 00:26:10.789 Device Self-Test: Not Supported 00:26:10.789 Directives: Not Supported 00:26:10.789 NVMe-MI: Not Supported 00:26:10.789 Virtualization Management: Not Supported 00:26:10.789 Doorbell Buffer Config: Not Supported 00:26:10.789 Get LBA Status Capability: Not Supported 00:26:10.789 Command & Feature Lockdown Capability: Not Supported 00:26:10.789 Abort Command Limit: 4 00:26:10.789 Async Event Request Limit: 4 00:26:10.789 Number of Firmware Slots: N/A 00:26:10.789 Firmware Slot 1 Read-Only: N/A 00:26:10.789 Firmware Activation Without Reset: N/A 00:26:10.789
[2024-11-20 17:46:52.382162] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.382170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.382174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.382178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c05c0) on tqpair=0x2379a80 00:26:10.789 [2024-11-20 17:46:52.382185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.789 [2024-11-20 17:46:52.382192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.789 [2024-11-20 17:46:52.382195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.789 [2024-11-20 17:46:52.382199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0740) on tqpair=0x2379a80 00:26:10.789
Multiple Update Detection Support: N/A 00:26:10.789 Firmware Update Granularity: No Information Provided 00:26:10.789 Per-Namespace SMART Log: No 00:26:10.789 Asymmetric Namespace Access Log Page: Not Supported 00:26:10.789 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:10.789 Command Effects Log Page: Supported 00:26:10.789 Get Log Page Extended Data: Supported 00:26:10.789 Telemetry Log Pages: Not Supported 00:26:10.789 Persistent Event Log Pages: Not Supported 00:26:10.790 Supported Log Pages Log Page: May 
Support 00:26:10.790 Commands Supported & Effects Log Page: Not Supported 00:26:10.790 Feature Identifiers & Effects Log Page:May Support 00:26:10.790 NVMe-MI Commands & Effects Log Page: May Support 00:26:10.790 Data Area 4 for Telemetry Log: Not Supported 00:26:10.790 Error Log Page Entries Supported: 128 00:26:10.790 Keep Alive: Supported 00:26:10.790 Keep Alive Granularity: 10000 ms 00:26:10.790 00:26:10.790 NVM Command Set Attributes 00:26:10.790 ========================== 00:26:10.790 Submission Queue Entry Size 00:26:10.790 Max: 64 00:26:10.790 Min: 64 00:26:10.790 Completion Queue Entry Size 00:26:10.790 Max: 16 00:26:10.790 Min: 16 00:26:10.790 Number of Namespaces: 32 00:26:10.790 Compare Command: Supported 00:26:10.790 Write Uncorrectable Command: Not Supported 00:26:10.790 Dataset Management Command: Supported 00:26:10.790 Write Zeroes Command: Supported 00:26:10.790 Set Features Save Field: Not Supported 00:26:10.790 Reservations: Supported 00:26:10.790 Timestamp: Not Supported 00:26:10.790 Copy: Supported 00:26:10.790 Volatile Write Cache: Present 00:26:10.790 Atomic Write Unit (Normal): 1 00:26:10.790 Atomic Write Unit (PFail): 1 00:26:10.790 Atomic Compare & Write Unit: 1 00:26:10.790 Fused Compare & Write: Supported 00:26:10.790 Scatter-Gather List 00:26:10.790 SGL Command Set: Supported 00:26:10.790 SGL Keyed: Supported 00:26:10.790 SGL Bit Bucket Descriptor: Not Supported 00:26:10.790 SGL Metadata Pointer: Not Supported 00:26:10.790 Oversized SGL: Not Supported 00:26:10.790 SGL Metadata Address: Not Supported 00:26:10.790 SGL Offset: Supported 00:26:10.790 Transport SGL Data Block: Not Supported 00:26:10.790 Replay Protected Memory Block: Not Supported 00:26:10.790 00:26:10.790 Firmware Slot Information 00:26:10.790 ========================= 00:26:10.790 Active slot: 1 00:26:10.790 Slot 1 Firmware Revision: 24.09.1 00:26:10.790 00:26:10.790 00:26:10.790 Commands Supported and Effects 00:26:10.790 ============================== 00:26:10.790 Admin Commands 00:26:10.790 -------------- 00:26:10.790 Get Log Page (02h): Supported 00:26:10.790 Identify (06h): Supported 00:26:10.790 Abort (08h): Supported 00:26:10.790 Set Features (09h): Supported 00:26:10.790 Get Features (0Ah): Supported 00:26:10.790 Asynchronous Event Request (0Ch): Supported 00:26:10.790 Keep Alive (18h): Supported 00:26:10.790 I/O Commands 00:26:10.790 ------------ 00:26:10.790 Flush (00h): Supported LBA-Change 00:26:10.790 Write (01h): Supported LBA-Change 00:26:10.790 Read (02h): Supported 00:26:10.790 Compare (05h): Supported 00:26:10.790 Write Zeroes (08h): Supported LBA-Change 00:26:10.790 Dataset Management (09h): Supported LBA-Change 00:26:10.790 Copy (19h): Supported LBA-Change 00:26:10.790 00:26:10.790 Error Log 00:26:10.790 ========= 00:26:10.790 00:26:10.790 Arbitration 00:26:10.790 =========== 00:26:10.790 Arbitration Burst: 1 00:26:10.790 00:26:10.790 Power Management 00:26:10.790 ================ 00:26:10.790 Number of Power States: 1 00:26:10.790 Current Power State: Power State #0 00:26:10.790 Power State #0: 00:26:10.790 Max Power: 0.00 W 00:26:10.790 Non-Operational State: Operational 00:26:10.790 Entry Latency: Not Reported 00:26:10.790 Exit Latency: Not Reported 00:26:10.790 Relative Read Throughput: 0 00:26:10.790 Relative Read Latency: 0 00:26:10.790 Relative Write Throughput: 0 00:26:10.790 Relative Write Latency: 0 00:26:10.790 Idle Power: Not Reported 00:26:10.790 Active Power: Not Reported 00:26:10.790 Non-Operational Permissive Mode: Not Supported 00:26:10.790 00:26:10.790 Health 
Information 00:26:10.790 ================== 00:26:10.790 Critical Warnings: 00:26:10.790 Available Spare Space: OK 00:26:10.790 Temperature: OK 00:26:10.790 Device Reliability: OK 00:26:10.790 Read Only: No 00:26:10.790 Volatile Memory Backup: OK 00:26:10.790 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:10.790 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:10.790 Available Spare: 0% 00:26:10.790 Available Spare Threshold: 0% 00:26:10.790 Life Percentage U
[2024-11-20 17:46:52.382327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382335] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.382345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.382376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0740, cid 7, qid 0 00:26:10.790 [2024-11-20 17:46:52.382460] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.382468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.382472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0740) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382544] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:10.790 [2024-11-20 17:46:52.382567] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfcc0) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.790 [2024-11-20 17:46:52.382583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bfe40) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.790 [2024-11-20 17:46:52.382593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23bffc0) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.790 [2024-11-20 17:46:52.382603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.790 [2024-11-20 17:46:52.382619] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382624] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382628] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.382636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.382665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 
17:46:52.382742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.382750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.382754] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382758] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382768] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.382784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.382806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.382923] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.382940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.382944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.382955] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:10.790 [2024-11-20 17:46:52.382960] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:10.790 [2024-11-20 17:46:52.382971] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382976] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.382980] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.382988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383130] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383191] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383271] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383286] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383403] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383408] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383412] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383556] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383565] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 
17:46:52.383662] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383670] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383681] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383686] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383781] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383792] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383840] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.383901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.383908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.383912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383916] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.383927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383932] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.383936] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.383943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.383960] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.384030] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.384047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.384052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 
[2024-11-20 17:46:52.384057] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.384083] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384089] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384093] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.384101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.384123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.384199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.384206] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.384210] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.790 [2024-11-20 17:46:52.384225] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384234] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.790 [2024-11-20 17:46:52.384241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.790 [2024-11-20 17:46:52.384259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.790 [2024-11-20 17:46:52.384333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.790 [2024-11-20 17:46:52.384340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.790 [2024-11-20 17:46:52.384344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.790 [2024-11-20 17:46:52.384349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.384359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384364] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384368] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.384376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.384416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.384479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.384486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.384490] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.384505] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384510] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384514] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.384522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.384541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.384604] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.384616] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.384620] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.384636] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384645] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.384652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.384672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.384732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.384739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.384743] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.384758] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384763] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384767] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.384775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.384793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.384856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.384863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.384867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.384882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384887] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384891] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.384898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.384916] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.384979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.384986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.384991] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.384995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385010] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385040] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385153] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385261] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385398] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385450] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385514] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385518] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385522] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385533] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385647] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385652] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385663] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385668] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385672] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 
17:46:52.385758] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385774] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385784] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.385880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.385887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.385891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.385906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.385915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.385922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.385940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.386010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.386017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.386021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.386025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.386035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.386040] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.386044] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.386052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.390077] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.390102] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.390110] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 
[2024-11-20 17:46:52.390114] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.390119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.390133] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.390139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.390143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2379a80) 00:26:10.791 [2024-11-20 17:46:52.390152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.791 [2024-11-20 17:46:52.390178] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23c0140, cid 3, qid 0 00:26:10.791 [2024-11-20 17:46:52.390249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:10.791 [2024-11-20 17:46:52.390256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:10.791 [2024-11-20 17:46:52.390260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:10.791 [2024-11-20 17:46:52.390264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23c0140) on tqpair=0x2379a80 00:26:10.791 [2024-11-20 17:46:52.390273] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:10.791 sed: 0% 00:26:10.791 Data Units Read: 0 00:26:10.791 Data Units Written: 0 00:26:10.791 Host Read Commands: 0 00:26:10.791 Host Write Commands: 0 00:26:10.791 Controller Busy Time: 0 minutes 00:26:10.791 Power Cycles: 0 00:26:10.791 Power On Hours: 0 hours 00:26:10.791 Unsafe Shutdowns: 0 00:26:10.791 Unrecoverable Media Errors: 0 00:26:10.791 Lifetime Error Log Entries: 0 00:26:10.791 Warning Temperature Time: 0 minutes 00:26:10.791 Critical Temperature Time: 0 minutes 00:26:10.791 00:26:10.791 Number of Queues 00:26:10.791 ================ 00:26:10.791 Number of I/O Submission Queues: 127 00:26:10.791 Number of I/O Completion Queues: 127 00:26:10.791 00:26:10.791 Active Namespaces 00:26:10.791 ================= 00:26:10.791 Namespace ID:1 00:26:10.791 Error Recovery Timeout: Unlimited 00:26:10.791 Command Set Identifier: NVM (00h) 00:26:10.791 Deallocate: Supported 00:26:10.791 Deallocated/Unwritten Error: Not Supported 00:26:10.791 Deallocated Read Value: Unknown 00:26:10.791 Deallocate in Write Zeroes: Not Supported 00:26:10.791 Deallocated Guard Field: 0xFFFF 00:26:10.791 Flush: Supported 00:26:10.791 Reservation: Supported 00:26:10.791 Namespace Sharing Capabilities: Multiple Controllers 00:26:10.791 Size (in LBAs): 131072 (0GiB) 00:26:10.791 Capacity (in LBAs): 131072 (0GiB) 00:26:10.791 Utilization (in LBAs): 131072 (0GiB) 00:26:10.791 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:10.791 EUI64: ABCDEF0123456789 00:26:10.791 UUID: a796bb2d-8529-4cbb-b936-3284e3db3ada 00:26:10.791 Thin Provisioning: Not Supported 00:26:10.791 Per-NS Atomic Units: Yes 00:26:10.791 Atomic Boundary Size (Normal): 0 00:26:10.791 Atomic Boundary Size (PFail): 0 00:26:10.791 Atomic Boundary Offset: 0 00:26:10.791 Maximum Single Source Range Length: 65535 00:26:10.791 Maximum Copy Length: 65535 00:26:10.791 Maximum Source Range Count: 1 00:26:10.791 NGUID/EUI64 Never Reused: No 00:26:10.791 Namespace Write Protected: No 00:26:10.791 Number of LBA Formats: 1 00:26:10.791 Current LBA Format: LBA Format #00 
00:26:10.791 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:10.791 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.791 rmmod nvme_tcp 00:26:10.791 rmmod nvme_fabrics 00:26:10.791 rmmod nvme_keyring 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 107120 ']' 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 107120 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 107120 ']' 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 107120 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.791 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107120 00:26:11.050 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:11.050 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:11.050 killing process with pid 107120 00:26:11.050 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107120' 00:26:11.050 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 107120 00:26:11.050 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 107120 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 
00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:11.309 17:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:11.309 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:26:11.568 ************************************ 00:26:11.568 END TEST nvmf_identify 00:26:11.568 ************************************ 00:26:11.568 00:26:11.568 real 0m3.249s 00:26:11.568 user 0m8.161s 00:26:11.568 sys 0m0.895s 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.568 ************************************ 00:26:11.568 START TEST 
nvmf_perf 00:26:11.568 ************************************ 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.568 * Looking for test storage... 00:26:11.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:26:11.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.827 --rc genhtml_branch_coverage=1 00:26:11.827 --rc genhtml_function_coverage=1 00:26:11.827 --rc genhtml_legend=1 00:26:11.827 --rc geninfo_all_blocks=1 00:26:11.827 --rc geninfo_unexecuted_blocks=1 00:26:11.827 00:26:11.827 ' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.827 --rc genhtml_branch_coverage=1 00:26:11.827 --rc genhtml_function_coverage=1 00:26:11.827 --rc genhtml_legend=1 00:26:11.827 --rc geninfo_all_blocks=1 00:26:11.827 --rc geninfo_unexecuted_blocks=1 00:26:11.827 00:26:11.827 ' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.827 --rc genhtml_branch_coverage=1 00:26:11.827 --rc genhtml_function_coverage=1 00:26:11.827 --rc genhtml_legend=1 00:26:11.827 --rc geninfo_all_blocks=1 00:26:11.827 --rc geninfo_unexecuted_blocks=1 00:26:11.827 00:26:11.827 ' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.827 --rc genhtml_branch_coverage=1 00:26:11.827 --rc genhtml_function_coverage=1 00:26:11.827 --rc genhtml_legend=1 00:26:11.827 --rc geninfo_all_blocks=1 00:26:11.827 --rc geninfo_unexecuted_blocks=1 00:26:11.827 00:26:11.827 ' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.827 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:11.828 Cannot find device "nvmf_init_br" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:11.828 Cannot find device "nvmf_init_br2" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:11.828 Cannot find device "nvmf_tgt_br" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:11.828 Cannot find device "nvmf_tgt_br2" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:11.828 Cannot find device "nvmf_init_br" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:11.828 Cannot find device "nvmf_init_br2" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:11.828 Cannot find device "nvmf_tgt_br" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:11.828 Cannot find device "nvmf_tgt_br2" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:11.828 Cannot find device "nvmf_br" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:11.828 Cannot find device "nvmf_init_if" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:11.828 Cannot find device "nvmf_init_if2" 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:11.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:11.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:11.828 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:12.101 17:46:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:12.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:12.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:26:12.101 00:26:12.101 --- 10.0.0.3 ping statistics --- 00:26:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.101 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:12.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:12.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:26:12.101 00:26:12.101 --- 10.0.0.4 ping statistics --- 00:26:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.101 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:12.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:12.101 00:26:12.101 --- 10.0.0.1 ping statistics --- 00:26:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.101 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:12.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:26:12.101 00:26:12.101 --- 10.0.0.2 ping statistics --- 00:26:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.101 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:12.101 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=107399 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 107399 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 107399 ']' 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
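For orientation: the nvmf/common.sh trace above builds the test network out of veth pairs joined by a Linux bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4) and the initiator ends left in the root namespace (10.0.0.1/10.0.0.2). A minimal sketch of the same topology, condensed to a single initiator/target pair and reusing the interface names from the log (assumes root plus iproute2/iptables; not a verbatim excerpt of common.sh):

    # Sketch: one initiator veth pair + one target veth pair tied together by a bridge.
    ip netns add nvmf_tgt_ns_spdk                              # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                            # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.3                                         # same sanity check the harness runs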
00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.399 17:46:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:12.399 [2024-11-20 17:46:53.994687] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:12.399 [2024-11-20 17:46:53.994836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.399 [2024-11-20 17:46:54.149841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.658 [2024-11-20 17:46:54.247075] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.658 [2024-11-20 17:46:54.247140] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.658 [2024-11-20 17:46:54.247154] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.658 [2024-11-20 17:46:54.247165] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.658 [2024-11-20 17:46:54.247174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.658 [2024-11-20 17:46:54.247358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.658 [2024-11-20 17:46:54.247515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.658 [2024-11-20 17:46:54.248109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.658 [2024-11-20 17:46:54.248117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.225 17:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.225 17:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:13.225 17:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:13.225 17:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.225 17:46:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:13.484 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.484 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:13.484 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:13.742 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:13.742 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:14.308 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:14.308 17:46:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:14.566 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:14.566 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:14.566 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
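The host/perf.sh@28-34 steps above collect the bdev list the target will export: gen_nvme.sh plus load_subsystem_config attach the local NVMe controller, framework_get_config is filtered through jq for its PCI address, and bdev_malloc_create adds a small RAM disk alongside it. A condensed sketch, with the rpc.py path and the 64 MiB / 512 B malloc arguments taken from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # PCI address of the attached NVMe bdev, used later as local_nvme_trid (0000:00:10.0 here):
    $rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'
    $rpc bdev_malloc_create 64 512   # 64 MiB malloc bdev, 512 B blocks -> "Malloc0"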
00:26:14.566 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:14.566 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:14.825 [2024-11-20 17:46:56.360247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.825 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.083 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:15.083 17:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.341 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:15.341 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:15.599 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:15.857 [2024-11-20 17:46:57.594614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:15.857 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:16.423 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:16.423 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:16.423 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:16.423 17:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:17.357 Initializing NVMe Controllers 00:26:17.357 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:17.357 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:17.357 Initialization complete. Launching workers. 00:26:17.357 ======================================================== 00:26:17.357 Latency(us) 00:26:17.357 Device Information : IOPS MiB/s Average min max 00:26:17.357 PCIE (0000:00:10.0) NSID 1 from core 0: 23518.64 91.87 1360.73 365.63 6060.97 00:26:17.357 ======================================================== 00:26:17.357 Total : 23518.64 91.87 1360.73 365.63 6060.97 00:26:17.357 00:26:17.357 17:46:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:18.729 Initializing NVMe Controllers 00:26:18.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:18.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:18.729 Initialization complete. Launching workers. 
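The RPC sequence traced above (and feeding the fabric run whose results follow) is the usual provisioning order for an SPDK NVMe-oF target: transport first, then subsystem, then namespaces, then listeners. A sketch with the same NQN and listen address as the log; the transport options are reproduced exactly as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                            # options as traced above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # becomes NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420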
00:26:18.729 ======================================================== 00:26:18.729 Latency(us) 00:26:18.729 Device Information : IOPS MiB/s Average min max 00:26:18.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2868.91 11.21 348.23 131.26 7227.80 00:26:18.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.35 5028.86 12038.07 00:26:18.729 ======================================================== 00:26:18.729 Total : 2992.91 11.69 670.56 131.26 12038.07 00:26:18.729 00:26:18.729 17:47:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:20.104 Initializing NVMe Controllers 00:26:20.104 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.104 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:20.104 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:20.104 Initialization complete. Launching workers. 00:26:20.104 ======================================================== 00:26:20.104 Latency(us) 00:26:20.104 Device Information : IOPS MiB/s Average min max 00:26:20.104 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8003.37 31.26 3999.16 738.37 9381.65 00:26:20.104 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2682.06 10.48 12139.30 7270.15 24010.64 00:26:20.104 ======================================================== 00:26:20.104 Total : 10685.43 41.74 6042.34 738.37 24010.64 00:26:20.104 00:26:20.104 17:47:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:20.104 17:47:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:22.639 Initializing NVMe Controllers 00:26:22.639 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.639 Controller IO queue size 128, less than required. 00:26:22.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.639 Controller IO queue size 128, less than required. 00:26:22.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.639 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.639 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:22.639 Initialization complete. Launching workers. 
00:26:22.639 ======================================================== 00:26:22.639 Latency(us) 00:26:22.639 Device Information : IOPS MiB/s Average min max 00:26:22.639 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1392.64 348.16 93332.96 53785.10 210881.12 00:26:22.639 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 456.56 114.14 306276.25 103359.28 692899.32 00:26:22.639 ======================================================== 00:26:22.639 Total : 1849.20 462.30 145908.00 53785.10 692899.32 00:26:22.639 00:26:22.897 17:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:26:22.897 Initializing NVMe Controllers 00:26:22.897 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.897 Controller IO queue size 128, less than required. 00:26:22.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.897 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:22.897 Controller IO queue size 128, less than required. 00:26:22.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:22.897 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:22.897 WARNING: Some requested NVMe devices were skipped 00:26:22.897 No valid NVMe controllers or AIO or URING devices found 00:26:23.154 17:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:26:25.685 Initializing NVMe Controllers 00:26:25.686 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.686 Controller IO queue size 128, less than required. 00:26:25.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:25.686 Controller IO queue size 128, less than required. 00:26:25.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:25.686 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:25.686 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:25.686 Initialization complete. Launching workers. 
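Two invocations back, the -o 36964 run removed both namespaces and ended with no testable devices: spdk_nvme_perf requires the I/O size to be a whole multiple of each namespace's sector size, and 36964 is a multiple of neither. Checking the arithmetic behind the warnings:

    echo $(( 36964 % 512 ))    # 100 -> not aligned to NSID 1's 512 B sectors
    echo $(( 36964 % 4096 ))   # 100 -> not aligned to NSID 2's 4 KiB sectors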
00:26:25.686 00:26:25.686 ==================== 00:26:25.686 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:25.686 TCP transport: 00:26:25.686 polls: 6741 00:26:25.686 idle_polls: 4100 00:26:25.686 sock_completions: 2641 00:26:25.686 nvme_completions: 4707 00:26:25.686 submitted_requests: 7030 00:26:25.686 queued_requests: 1 00:26:25.686 00:26:25.686 ==================== 00:26:25.686 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:25.686 TCP transport: 00:26:25.686 polls: 7171 00:26:25.686 idle_polls: 4574 00:26:25.686 sock_completions: 2597 00:26:25.686 nvme_completions: 5365 00:26:25.686 submitted_requests: 8134 00:26:25.686 queued_requests: 1 00:26:25.686 ======================================================== 00:26:25.686 Latency(us) 00:26:25.686 Device Information : IOPS MiB/s Average min max 00:26:25.686 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1174.01 293.50 112981.14 73397.40 196893.77 00:26:25.686 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1338.16 334.54 96308.37 45208.29 135586.05 00:26:25.686 ======================================================== 00:26:25.686 Total : 2512.17 628.04 104100.03 45208.29 196893.77 00:26:25.686 00:26:25.686 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:25.686 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:25.944 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:25.944 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:26:25.944 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=7cdca21a-ffd7-46bb-9c5b-4061e576c8ea 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7cdca21a-ffd7-46bb-9c5b-4061e576c8ea 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=7cdca21a-ffd7-46bb-9c5b-4061e576c8ea 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:26:26.203 17:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:26.461 { 00:26:26.461 "base_bdev": "Nvme0n1", 00:26:26.461 "block_size": 4096, 00:26:26.461 "cluster_size": 4194304, 00:26:26.461 "free_clusters": 1278, 00:26:26.461 "name": "lvs_0", 00:26:26.461 "total_data_clusters": 1278, 00:26:26.461 "uuid": "7cdca21a-ffd7-46bb-9c5b-4061e576c8ea" 00:26:26.461 } 00:26:26.461 ]' 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7cdca21a-ffd7-46bb-9c5b-4061e576c8ea") .free_clusters' 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="7cdca21a-ffd7-46bb-9c5b-4061e576c8ea") .cluster_size' 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:26:26.461 5112 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:26:26.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7cdca21a-ffd7-46bb-9c5b-4061e576c8ea lbd_0 5112 00:26:27.028 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1dcd1d73-573e-4bfc-8a69-f5631ddb018a 00:26:27.028 17:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1dcd1d73-573e-4bfc-8a69-f5631ddb018a lvs_n_0 00:26:27.285 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=efee80bf-c85e-4a72-a9fb-fd3abb703a66 00:26:27.285 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb efee80bf-c85e-4a72-a9fb-fd3abb703a66 00:26:27.286 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=efee80bf-c85e-4a72-a9fb-fd3abb703a66 00:26:27.286 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:26:27.286 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:26:27.286 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:26:27.286 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:27.544 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:26:27.544 { 00:26:27.544 "base_bdev": "Nvme0n1", 00:26:27.544 "block_size": 4096, 00:26:27.544 "cluster_size": 4194304, 00:26:27.544 "free_clusters": 0, 00:26:27.544 "name": "lvs_0", 00:26:27.544 "total_data_clusters": 1278, 00:26:27.544 "uuid": "7cdca21a-ffd7-46bb-9c5b-4061e576c8ea" 00:26:27.544 }, 00:26:27.544 { 00:26:27.544 "base_bdev": "1dcd1d73-573e-4bfc-8a69-f5631ddb018a", 00:26:27.544 "block_size": 4096, 00:26:27.544 "cluster_size": 4194304, 00:26:27.544 "free_clusters": 1276, 00:26:27.544 "name": "lvs_n_0", 00:26:27.544 "total_data_clusters": 1276, 00:26:27.544 "uuid": "efee80bf-c85e-4a72-a9fb-fd3abb703a66" 00:26:27.544 } 00:26:27.544 ]' 00:26:27.544 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="efee80bf-c85e-4a72-a9fb-fd3abb703a66") .free_clusters' 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="efee80bf-c85e-4a72-a9fb-fd3abb703a66") .cluster_size' 00:26:27.803 5104 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:26:27.803 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u efee80bf-c85e-4a72-a9fb-fd3abb703a66 lbd_nest_0 5104 00:26:28.062 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=ef8a4b4b-0843-432e-b36c-13754dc5da6f 00:26:28.062 17:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.320 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:28.320 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ef8a4b4b-0843-432e-b36c-13754dc5da6f 00:26:28.579 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:28.838 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:28.838 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:28.838 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:28.838 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:28.838 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:29.406 Initializing NVMe Controllers 00:26:29.406 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.406 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:29.406 WARNING: Some requested NVMe devices were skipped 00:26:29.406 No valid NVMe controllers or AIO or URING devices found 00:26:29.406 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:29.406 17:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:39.470 Initializing NVMe Controllers 00:26:39.470 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.470 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:39.470 Initialization complete. Launching workers. 
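The sizes above all follow from the lvstore geometry reported by bdev_lvol_get_lvstores: get_lvs_free_mb is free_clusters times cluster_size expressed in MiB, and the resulting 5104 MiB nested volume is exactly the ns size quoted in the "invalid ns size" warning, which fires because the lvol's 4096 B block size cannot serve the requested 512 B I/O. The arithmetic, checked:

    echo $(( 1278 * 4194304 / 1048576 ))   # 5112 MiB -> size passed to bdev_lvol_create lbd_0
    echo $(( 1276 * 4194304 / 1048576 ))   # 5104 MiB -> size of the nested lbd_nest_0
    echo $(( 5104 * 1024 * 1024 ))         # 5351931904 bytes, the ns size in the warning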
00:26:39.470 ======================================================== 00:26:39.470 Latency(us) 00:26:39.470 Device Information : IOPS MiB/s Average min max 00:26:39.470 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 886.20 110.77 1127.93 384.43 5630.25 00:26:39.470 ======================================================== 00:26:39.470 Total : 886.20 110.77 1127.93 384.43 5630.25 00:26:39.470 00:26:39.470 17:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:39.470 17:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:39.470 17:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:39.728 Initializing NVMe Controllers 00:26:39.728 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.728 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:39.728 WARNING: Some requested NVMe devices were skipped 00:26:39.728 No valid NVMe controllers or AIO or URING devices found 00:26:39.987 17:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:39.987 17:47:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:52.186 Initializing NVMe Controllers 00:26:52.186 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.186 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.186 Initialization complete. Launching workers. 
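This run and the ones around it follow the sweep declared at host/perf.sh@95-96 above: queue depths 1/32/128 crossed with I/O sizes 512/131072, where every 512 B pass is expected to drop the 4 KiB-block lvol namespace. The loop shape, sketched with the perf arguments from the trace:

    for qd in 1 32 128; do
      for o in 512 131072; do
        spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
      done
    done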
00:26:52.186 ======================================================== 00:26:52.186 Latency(us) 00:26:52.187 Device Information : IOPS MiB/s Average min max 00:26:52.187 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1145.90 143.24 27956.57 8011.61 79590.34 00:26:52.187 ======================================================== 00:26:52.187 Total : 1145.90 143.24 27956.57 8011.61 79590.34 00:26:52.187 00:26:52.187 17:47:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:52.187 17:47:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.187 17:47:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:52.187 Initializing NVMe Controllers 00:26:52.187 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.187 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:52.187 WARNING: Some requested NVMe devices were skipped 00:26:52.187 No valid NVMe controllers or AIO or URING devices found 00:26:52.187 17:47:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.187 17:47:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:02.204 Initializing NVMe Controllers 00:27:02.204 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.204 Controller IO queue size 128, less than required. 00:27:02.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.204 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.204 Initialization complete. Launching workers. 
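A quick way to read the latency tables: the MiB/s column is just IOPS times I/O size, e.g. for the 32-deep 128 KiB run above:

    awk 'BEGIN { printf "%.2f\n", 1145.90 * 131072 / 1048576 }'   # ~143.24 MiB/s, as printed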
00:27:02.204 ======================================================== 00:27:02.204 Latency(us) 00:27:02.204 Device Information : IOPS MiB/s Average min max 00:27:02.204 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3312.86 414.11 38683.20 14974.90 135258.28 00:27:02.204 ======================================================== 00:27:02.204 Total : 3312.86 414.11 38683.20 14974.90 135258.28 00:27:02.204 00:27:02.204 17:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.204 17:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ef8a4b4b-0843-432e-b36c-13754dc5da6f 00:27:02.204 17:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:02.204 17:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1dcd1d73-573e-4bfc-8a69-f5631ddb018a 00:27:02.204 17:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.462 rmmod nvme_tcp 00:27:02.462 rmmod nvme_fabrics 00:27:02.462 rmmod nvme_keyring 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 107399 ']' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 107399 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 107399 ']' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 107399 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107399 00:27:02.462 killing process with pid 107399 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107399' 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 107399 00:27:02.462 17:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 107399 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:03.935 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.936 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.936 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.936 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:27:03.936 00:27:03.936 real 0m52.441s 00:27:03.936 user 3m17.036s 00:27:03.936 sys 0m11.017s 00:27:03.936 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.936 ************************************ 00:27:03.936 17:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:03.936 END TEST nvmf_perf 00:27:03.936 ************************************ 00:27:04.194 17:47:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:04.194 17:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.195 ************************************ 00:27:04.195 START TEST nvmf_fio_host 00:27:04.195 ************************************ 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:04.195 * Looking for test storage... 00:27:04.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.195 --rc genhtml_branch_coverage=1 00:27:04.195 --rc genhtml_function_coverage=1 00:27:04.195 --rc genhtml_legend=1 00:27:04.195 --rc geninfo_all_blocks=1 00:27:04.195 --rc geninfo_unexecuted_blocks=1 00:27:04.195 00:27:04.195 ' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.195 --rc genhtml_branch_coverage=1 00:27:04.195 --rc genhtml_function_coverage=1 00:27:04.195 --rc genhtml_legend=1 00:27:04.195 --rc geninfo_all_blocks=1 00:27:04.195 --rc geninfo_unexecuted_blocks=1 00:27:04.195 00:27:04.195 ' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.195 --rc genhtml_branch_coverage=1 00:27:04.195 --rc genhtml_function_coverage=1 00:27:04.195 --rc genhtml_legend=1 00:27:04.195 --rc geninfo_all_blocks=1 00:27:04.195 --rc geninfo_unexecuted_blocks=1 00:27:04.195 00:27:04.195 ' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.195 --rc genhtml_branch_coverage=1 00:27:04.195 --rc genhtml_function_coverage=1 00:27:04.195 --rc genhtml_legend=1 00:27:04.195 --rc geninfo_all_blocks=1 00:27:04.195 --rc geninfo_unexecuted_blocks=1 00:27:04.195 00:27:04.195 ' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.195 17:47:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.195 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.196 17:47:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:04.196 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:04.456 Cannot find device "nvmf_init_br" 00:27:04.456 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:27:04.456 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:04.456 Cannot find device "nvmf_init_br2" 00:27:04.456 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:27:04.456 17:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:04.456 Cannot find device "nvmf_tgt_br" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:27:04.456 Cannot find device "nvmf_tgt_br2" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:04.456 Cannot find device "nvmf_init_br" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:04.456 Cannot find device "nvmf_init_br2" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:04.456 Cannot find device "nvmf_tgt_br" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:04.456 Cannot find device "nvmf_tgt_br2" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:04.456 Cannot find device "nvmf_br" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:04.456 Cannot find device "nvmf_init_if" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:04.456 Cannot find device "nvmf_init_if2" 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:04.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:04.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:04.456 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:04.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:04.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:27:04.715 00:27:04.715 --- 10.0.0.3 ping statistics --- 00:27:04.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.715 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:04.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:04.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:27:04.715 00:27:04.715 --- 10.0.0.4 ping statistics --- 00:27:04.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.715 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:04.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:27:04.715 00:27:04.715 --- 10.0.0.1 ping statistics --- 00:27:04.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.715 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:04.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:27:04.715 00:27:04.715 --- 10.0.0.2 ping statistics --- 00:27:04.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.715 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:04.715 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=108417 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:04.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
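The nvmf_veth_init trace above builds a two-sided veth topology joined through the nvmf_br bridge: the initiator endpoints (nvmf_init_if, nvmf_init_if2) stay in the root namespace with 10.0.0.1-2, the target endpoints (nvmf_tgt_if, nvmf_tgt_if2) move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3-4, iptables rules admit TCP port 4420, and four pings verify both directions. A condensed sketch of that same sequence, using only the interface names and addresses from the trace:

# Sketch of nvmf_veth_init as traced above (names and addresses from the trace).
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if side is the endpoint, the *_br side gets bridged.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiators 10.0.0.1-2, targets 10.0.0.3-4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then enslave the *_br sides to one bridge.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

# Admit NVMe/TCP traffic (port 4420) plus intra-bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Verify connectivity in both directions before starting the target.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1 && ip netns exec "$NS" ping -c 1 10.0.0.2

The "Cannot find device" messages earlier in the trace are expected: the cleanup half of this routine runs first and tolerates missing interfaces via the `true` fallbacks.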
00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 108417 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 108417 ']' 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.716 17:47:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.716 [2024-11-20 17:47:46.465034] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:04.716 [2024-11-20 17:47:46.465408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.975 [2024-11-20 17:47:46.608650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.975 [2024-11-20 17:47:46.744293] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.975 [2024-11-20 17:47:46.744358] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.975 [2024-11-20 17:47:46.744373] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.975 [2024-11-20 17:47:46.744384] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.975 [2024-11-20 17:47:46.744393] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
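With the namespace ready, the harness prefixes the target binary with `ip netns exec` (NVMF_TARGET_NS_CMD prepended to NVMF_APP) and then blocks in waitforlisten until the RPC socket answers. waitforlisten is a helper in common/autotest_common.sh, so the polling loop below is only a minimal stand-in sketch, not the real helper:

# Launch nvmf_tgt inside the target namespace, as traced above:
# -i 0 is the shm id, -e 0xFFFF the tracepoint group mask, -m 0xF a 4-core mask.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 108417 in this run

# Minimal stand-in for waitforlisten: poll until the RPC server responds on
# /var/tmp/spdk.sock, bailing out if the target process dies first.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done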
00:27:04.975 [2024-11-20 17:47:46.744502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.975 [2024-11-20 17:47:46.744777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.975 [2024-11-20 17:47:46.745245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:04.975 [2024-11-20 17:47:46.745269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.910 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.910 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:27:05.910 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.168 [2024-11-20 17:47:47.797754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.168 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:06.168 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.168 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.169 17:47:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:06.735 Malloc1 00:27:06.735 17:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.993 17:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:07.252 17:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:07.509 [2024-11-20 17:47:49.100883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:07.509 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:07.767 17:47:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:08.027 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:08.027 fio-3.35 00:27:08.027 Starting 1 thread 00:27:10.562 00:27:10.562 test: (groupid=0, jobs=1): err= 0: pid=108548: Wed Nov 20 17:47:51 2024 00:27:10.562 read: IOPS=7904, BW=30.9MiB/s (32.4MB/s)(62.0MiB/2007msec) 00:27:10.562 slat (usec): min=2, max=3534, avg= 3.15, stdev=28.21 00:27:10.562 clat (usec): min=2887, max=18669, avg=8512.93, stdev=840.63 00:27:10.562 lat (usec): min=2927, max=18671, avg=8516.08, stdev=841.62 00:27:10.562 clat percentiles (usec): 00:27:10.562 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 8029], 00:27:10.562 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:27:10.562 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:27:10.562 | 99.00th=[10290], 99.50th=[13173], 99.90th=[17957], 99.95th=[18482], 00:27:10.562 | 99.99th=[18744] 00:27:10.562 bw ( KiB/s): min=30728, max=32024, per=99.90%, avg=31588.00, stdev=594.71, samples=4 00:27:10.562 iops : min= 7682, max= 8006, avg=7897.00, stdev=148.68, samples=4 00:27:10.562 write: IOPS=7878, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2007msec); 0 zone resets 00:27:10.562 slat (usec): min=2, max=241, avg= 3.03, stdev= 3.30 00:27:10.562 clat (usec): min=2169, max=13783, avg=7642.00, stdev=610.49 00:27:10.562 lat (usec): min=2181, max=13786, avg=7645.03, stdev=610.38 00:27:10.562 clat percentiles (usec): 00:27:10.562 | 1.00th=[ 6325], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:27:10.562 | 30.00th=[ 
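The RPC sequence traced above stands up the first fio target: a TCP transport, a 64 MiB malloc bdev, and subsystem cnode1 with a data listener and a discovery listener on 10.0.0.3:4420. fio_nvme then resolves any sanitizer runtime with ldd (empty here, hence the bare LD_PRELOAD) and drives the subsystem through the SPDK fio plugin's filename syntax. A condensed sketch of the same steps; job parameters beyond the traced --bs=4096 come from the plugin's example_config.fio and are not repeated here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport plus a malloc-backed namespace, exactly as traced above.
$rpc nvmf_create_transport -t tcp -o -u 8192           # -u: in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc1              # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Run fio through the preloaded SPDK NVMe plugin; the connection parameters
# ride inside the filename string (ioengine=spdk is set in the job file).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty on non-ASAN builds
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The same wrapper is reused below for the mock_sgl_config.fio run (16 KiB blocks) and for the lvol-backed subsystems, only the job file and target subsystem change.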
7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:27:10.562 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:27:10.562 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[12649], 99.95th=[12780], 00:27:10.562 | 99.99th=[13698] 00:27:10.562 bw ( KiB/s): min=31160, max=31680, per=99.99%, avg=31512.00, stdev=241.15, samples=4 00:27:10.562 iops : min= 7790, max= 7920, avg=7878.00, stdev=60.29, samples=4 00:27:10.562 lat (msec) : 4=0.13%, 10=98.79%, 20=1.08% 00:27:10.562 cpu : usr=65.80%, sys=25.07%, ctx=37, majf=0, minf=8 00:27:10.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:10.562 issued rwts: total=15865,15812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:10.562 00:27:10.562 Run status group 0 (all jobs): 00:27:10.562 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.0MiB (65.0MB), run=2007-2007msec 00:27:10.562 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2007-2007msec 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:10.562 17:47:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:10.562 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:10.562 fio-3.35 00:27:10.562 Starting 1 thread 00:27:13.109 00:27:13.109 test: (groupid=0, jobs=1): err= 0: pid=108591: Wed Nov 20 17:47:54 2024 00:27:13.109 read: IOPS=7864, BW=123MiB/s (129MB/s)(247MiB/2010msec) 00:27:13.109 slat (usec): min=3, max=117, avg= 3.79, stdev= 1.98 00:27:13.109 clat (usec): min=2677, max=20021, avg=9442.44, stdev=2312.96 00:27:13.109 lat (usec): min=2680, max=20025, avg=9446.22, stdev=2313.06 00:27:13.109 clat percentiles (usec): 00:27:13.109 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7373], 00:27:13.109 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10159], 00:27:13.109 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12256], 95.00th=[13304], 00:27:13.109 | 99.00th=[15533], 99.50th=[15926], 99.90th=[18220], 99.95th=[19792], 00:27:13.109 | 99.99th=[20055] 00:27:13.109 bw ( KiB/s): min=56672, max=73312, per=52.44%, avg=65992.00, stdev=6983.79, samples=4 00:27:13.109 iops : min= 3542, max= 4582, avg=4124.50, stdev=436.49, samples=4 00:27:13.109 write: IOPS=4729, BW=73.9MiB/s (77.5MB/s)(135MiB/1823msec); 0 zone resets 00:27:13.109 slat (usec): min=34, max=400, avg=39.05, stdev= 8.21 00:27:13.109 clat (usec): min=6536, max=19914, avg=11767.39, stdev=1955.88 00:27:13.109 lat (usec): min=6572, max=19951, avg=11806.44, stdev=1956.69 00:27:13.109 clat percentiles (usec): 00:27:13.109 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:27:13.109 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:27:13.109 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14484], 95.00th=[15270], 00:27:13.110 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19268], 99.95th=[19530], 00:27:13.110 | 99.99th=[19792] 00:27:13.110 bw ( KiB/s): min=58880, max=75744, per=90.59%, avg=68552.00, stdev=7259.43, samples=4 00:27:13.110 iops : min= 3680, max= 4734, avg=4284.50, stdev=453.71, samples=4 00:27:13.110 lat (msec) : 4=0.17%, 10=44.28%, 20=55.54%, 50=0.01% 00:27:13.110 cpu : usr=75.31%, sys=16.58%, ctx=5, majf=0, minf=5 00:27:13.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:13.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.110 issued rwts: total=15808,8622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.110 00:27:13.110 Run status group 0 (all jobs): 00:27:13.110 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=247MiB (259MB), run=2010-2010msec 00:27:13.110 WRITE: bw=73.9MiB/s (77.5MB/s), 
73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=135MiB (141MB), run=1823-1823msec 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:13.110 17:47:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:27:13.677 Nvme0n1 00:27:13.677 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e8402559-377b-473a-972c-78b5a5c52560 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e8402559-377b-473a-972c-78b5a5c52560 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e8402559-377b-473a-972c-78b5a5c52560 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:27:13.936 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:14.195 { 00:27:14.195 "base_bdev": "Nvme0n1", 00:27:14.195 "block_size": 4096, 00:27:14.195 "cluster_size": 1073741824, 00:27:14.195 "free_clusters": 4, 00:27:14.195 "name": "lvs_0", 00:27:14.195 "total_data_clusters": 4, 00:27:14.195 "uuid": "e8402559-377b-473a-972c-78b5a5c52560" 00:27:14.195 } 00:27:14.195 ]' 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e8402559-377b-473a-972c-78b5a5c52560") .free_clusters' 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e8402559-377b-473a-972c-78b5a5c52560") 
.cluster_size' 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:27:14.195 4096 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:27:14.195 17:47:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:27:14.453 a61e2fc9-bec7-422c-9682-f65c303e3e73 00:27:14.453 17:47:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:14.711 17:47:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:14.969 17:47:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:15.228 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- 
# ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:15.543 17:47:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:15.543 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:15.543 fio-3.35 00:27:15.543 Starting 1 thread 00:27:18.091 00:27:18.091 test: (groupid=0, jobs=1): err= 0: pid=108751: Wed Nov 20 17:47:59 2024 00:27:18.091 read: IOPS=6361, BW=24.8MiB/s (26.1MB/s)(49.9MiB/2008msec) 00:27:18.091 slat (usec): min=2, max=325, avg= 2.50, stdev= 3.68 00:27:18.091 clat (usec): min=3989, max=18106, avg=10559.00, stdev=891.13 00:27:18.091 lat (usec): min=3999, max=18108, avg=10561.49, stdev=890.89 00:27:18.091 clat percentiles (usec): 00:27:18.091 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:27:18.091 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:27:18.091 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:27:18.091 | 99.00th=[12649], 99.50th=[13042], 99.90th=[16057], 99.95th=[16909], 00:27:18.091 | 99.99th=[17171] 00:27:18.091 bw ( KiB/s): min=24742, max=25920, per=99.85%, avg=25405.50, stdev=489.64, samples=4 00:27:18.091 iops : min= 6185, max= 6480, avg=6351.25, stdev=122.64, samples=4 00:27:18.091 write: IOPS=6359, BW=24.8MiB/s (26.0MB/s)(49.9MiB/2008msec); 0 zone resets 00:27:18.091 slat (usec): min=2, max=246, avg= 2.58, stdev= 2.42 00:27:18.091 clat (usec): min=2396, max=18161, avg=9495.35, stdev=836.46 00:27:18.091 lat (usec): min=2411, max=18164, avg=9497.94, stdev=836.30 00:27:18.091 clat percentiles (usec): 00:27:18.091 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:27:18.091 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:27:18.091 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:27:18.091 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15664], 99.95th=[16909], 00:27:18.091 | 99.99th=[18220] 00:27:18.091 bw ( KiB/s): min=24816, max=25940, per=99.85%, avg=25399.00, stdev=459.52, samples=4 00:27:18.092 iops : min= 6204, max= 6485, avg=6349.75, stdev=114.88, samples=4 00:27:18.092 lat (msec) : 4=0.04%, 10=50.79%, 20=49.17% 00:27:18.092 cpu : usr=73.29%, sys=20.98%, ctx=11, majf=0, minf=17 00:27:18.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:18.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.092 issued rwts: total=12773,12769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.092 00:27:18.092 Run status group 0 (all jobs): 00:27:18.092 READ: bw=24.8MiB/s (26.1MB/s), 24.8MiB/s-24.8MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.3MB), run=2008-2008msec 00:27:18.092 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.9MiB (52.3MB), run=2008-2008msec 00:27:18.092 17:47:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:18.092 17:47:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=123a4016-bb36-42c4-9035-870c55f5b586 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 123a4016-bb36-42c4-9035-870c55f5b586 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=123a4016-bb36-42c4-9035-870c55f5b586 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:27:18.349 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:18.607 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:18.607 { 00:27:18.607 "base_bdev": "Nvme0n1", 00:27:18.607 "block_size": 4096, 00:27:18.607 "cluster_size": 1073741824, 00:27:18.607 "free_clusters": 0, 00:27:18.607 "name": "lvs_0", 00:27:18.607 "total_data_clusters": 4, 00:27:18.607 "uuid": "e8402559-377b-473a-972c-78b5a5c52560" 00:27:18.607 }, 00:27:18.607 { 00:27:18.607 "base_bdev": "a61e2fc9-bec7-422c-9682-f65c303e3e73", 00:27:18.607 "block_size": 4096, 00:27:18.607 "cluster_size": 4194304, 00:27:18.607 "free_clusters": 1022, 00:27:18.607 "name": "lvs_n_0", 00:27:18.607 "total_data_clusters": 1022, 00:27:18.607 "uuid": "123a4016-bb36-42c4-9035-870c55f5b586" 00:27:18.607 } 00:27:18.607 ]' 00:27:18.607 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="123a4016-bb36-42c4-9035-870c55f5b586") .free_clusters' 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="123a4016-bb36-42c4-9035-870c55f5b586") .cluster_size' 00:27:18.866 4088 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:27:18.866 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:27:19.125 ee5b604a-3728-4782-af65-415b8ba39f6a 00:27:19.125 17:48:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:19.384 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:19.642 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 
10.0.0.3 -s 4420 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:19.901 17:48:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:20.168 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:20.168 fio-3.35 00:27:20.168 Starting 1 thread 00:27:22.698 00:27:22.698 test: (groupid=0, jobs=1): err= 0: pid=108876: Wed Nov 20 17:48:04 2024 00:27:22.698 read: IOPS=5616, BW=21.9MiB/s (23.0MB/s)(44.1MiB/2009msec) 00:27:22.698 slat (usec): min=2, max=528, avg= 2.81, stdev= 6.15 00:27:22.698 clat (usec): min=4833, max=20434, 
avg=12039.95, stdev=1050.68 00:27:22.698 lat (usec): min=4844, max=20437, avg=12042.76, stdev=1050.26 00:27:22.698 clat percentiles (usec): 00:27:22.698 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:27:22.698 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:27:22.698 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:27:22.698 | 99.00th=[14615], 99.50th=[15008], 99.90th=[18744], 99.95th=[20055], 00:27:22.698 | 99.99th=[20317] 00:27:22.698 bw ( KiB/s): min=21264, max=23024, per=99.81%, avg=22424.00, stdev=787.91, samples=4 00:27:22.698 iops : min= 5316, max= 5756, avg=5606.00, stdev=196.98, samples=4 00:27:22.698 write: IOPS=5578, BW=21.8MiB/s (22.9MB/s)(43.8MiB/2009msec); 0 zone resets 00:27:22.698 slat (usec): min=2, max=363, avg= 2.92, stdev= 3.80 00:27:22.698 clat (usec): min=3427, max=19953, avg=10726.61, stdev=934.09 00:27:22.698 lat (usec): min=3443, max=19956, avg=10729.52, stdev=933.89 00:27:22.698 clat percentiles (usec): 00:27:22.698 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:27:22.698 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:27:22.698 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:27:22.698 | 99.00th=[12780], 99.50th=[12911], 99.90th=[17171], 99.95th=[17433], 00:27:22.698 | 99.99th=[18744] 00:27:22.698 bw ( KiB/s): min=21952, max=22704, per=99.96%, avg=22306.00, stdev=313.02, samples=4 00:27:22.698 iops : min= 5488, max= 5676, avg=5576.50, stdev=78.25, samples=4 00:27:22.698 lat (msec) : 4=0.02%, 10=10.56%, 20=89.38%, 50=0.04% 00:27:22.698 cpu : usr=71.61%, sys=22.31%, ctx=3, majf=0, minf=17 00:27:22.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:22.698 issued rwts: total=11284,11208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:22.698 00:27:22.698 Run status group 0 (all jobs): 00:27:22.698 READ: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.1MiB (46.2MB), run=2009-2009msec 00:27:22.698 WRITE: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=43.8MiB (45.9MB), run=2009-2009msec 00:27:22.698 17:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:22.698 17:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:27:22.957 17:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:23.216 17:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:23.475 17:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:23.734 17:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:24.301 17:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 
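The local-NVMe phase traced above stacks logical volumes two deep: lvstore lvs_0 on the attached PCIe controller, lvol lbd_0 exported as cnode2, then a nested lvstore lvs_n_0 carved out of lbd_0 with lbd_nest_0 exported as cnode3, each sized from the free clusters reported by bdev_lvol_get_lvstores (4 x 1 GiB, then 1022 x 4 MiB). Teardown runs strictly innermost-first. A condensed sketch of the stack and its unwind, with the subsystem-export steps elided and flags copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Build up: PCIe controller -> lvstore -> lvol -> nested lvstore -> nested lvol.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3  # flags as traced
$rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0       # 1 GiB clusters
$rpc bdev_lvol_create -l lvs_0 lbd_0 4096                       # 4096 MiB = all 4 clusters
$rpc bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
$rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088                # 1022 x 4 MiB clusters

# Tear down in reverse: nested lvol, nested store, base lvol, base store, controller.
$rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0
$rpc bdev_lvol_delete lvs_0/lbd_0
$rpc bdev_lvol_delete_lvstore -l lvs_0
$rpc bdev_nvme_detach_controller Nvme0

Deleting in reverse order matters because each lvstore holds its base bdev open; removing lvs_0 while lvs_n_0 still exists would fail.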
00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.946 rmmod nvme_tcp 00:27:24.946 rmmod nvme_fabrics 00:27:24.946 rmmod nvme_keyring 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 108417 ']' 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 108417 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 108417 ']' 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 108417 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108417 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108417' 00:27:24.946 killing process with pid 108417 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 108417 00:27:24.946 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 108417 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:25.204 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:25.205 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:25.463 17:48:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.463 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:27:25.464 00:27:25.464 real 0m21.374s 00:27:25.464 user 1m31.059s 00:27:25.464 sys 0m4.882s 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.464 ************************************ 00:27:25.464 END TEST nvmf_fio_host 00:27:25.464 ************************************ 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.464 ************************************ 00:27:25.464 START TEST nvmf_failover 00:27:25.464 ************************************ 00:27:25.464 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:25.723 * Looking for test storage... 
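Before the failover test starts above, nvmftestfini unwound everything nvmf_veth_init built: the kernel NVMe modules come out, the process is killed by pid, the SPDK-tagged iptables rules are filtered away with an iptables-save/iptables-restore round trip, and the links and namespace are deleted. A condensed sketch of that cleanup, assuming the names from the trace and a $nvmfpid saved at launch:

# Unload host-side NVMe modules (nvme_fabrics/nvme_keyring come out too,
# per the rmmod lines above), then stop the target by pid.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"       # killprocess 108417 in this run

# Drop only the rules this harness added: each was tagged with an
# SPDK_NVMF comment, so a grep -v round trip removes exactly those.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unwind the veth/bridge topology, then the namespace itself.
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster
    ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk         # the _remove_spdk_ns step in the trace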
00:27:25.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.723 --rc genhtml_branch_coverage=1 00:27:25.723 --rc genhtml_function_coverage=1 00:27:25.723 --rc genhtml_legend=1 00:27:25.723 --rc geninfo_all_blocks=1 00:27:25.723 --rc geninfo_unexecuted_blocks=1 00:27:25.723 00:27:25.723 ' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.723 --rc genhtml_branch_coverage=1 00:27:25.723 --rc genhtml_function_coverage=1 00:27:25.723 --rc genhtml_legend=1 00:27:25.723 --rc geninfo_all_blocks=1 00:27:25.723 --rc geninfo_unexecuted_blocks=1 00:27:25.723 00:27:25.723 ' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.723 --rc genhtml_branch_coverage=1 00:27:25.723 --rc genhtml_function_coverage=1 00:27:25.723 --rc genhtml_legend=1 00:27:25.723 --rc geninfo_all_blocks=1 00:27:25.723 --rc geninfo_unexecuted_blocks=1 00:27:25.723 00:27:25.723 ' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.723 --rc genhtml_branch_coverage=1 00:27:25.723 --rc genhtml_function_coverage=1 00:27:25.723 --rc genhtml_legend=1 00:27:25.723 --rc geninfo_all_blocks=1 00:27:25.723 --rc geninfo_unexecuted_blocks=1 00:27:25.723 00:27:25.723 ' 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:25.723 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same three toolchain directories repeated several times; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...same three toolchain directories repeated several times; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same three toolchain directories repeated several times; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same three toolchain directories repeated several times; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no
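The "[: : integer expression expected" complaint captured above deserves a note: test(1)'s -eq operator requires integer operands, and whatever variable common.sh line 33 checks expands to the empty string in this run (the trace only shows the expansion '[' '' -eq 1 ']'). The failure is harmless; the test returns false and the script continues. A minimal reproduction, with an illustrative guarded form that is not the repo's actual code:

unset maybe_flag                                  # hypothetical variable name
[ "$maybe_flag" -eq 1 ] && echo "flag set"        # -> [: : integer expression expected
[ -n "$maybe_flag" ] && [ "$maybe_flag" -eq 1 ] && echo "flag set"   # guarded: quietly false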
00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:25.724 Cannot find device "nvmf_init_br" 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:25.724 Cannot find device "nvmf_init_br2" 00:27:25.724 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:27:25.725 Cannot find device "nvmf_tgt_br" 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:25.725 Cannot find device "nvmf_tgt_br2" 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:25.725 Cannot find device "nvmf_init_br" 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:25.725 Cannot find device "nvmf_init_br2" 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:25.725 Cannot find device "nvmf_tgt_br" 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:27:25.725 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:25.983 Cannot find device "nvmf_tgt_br2" 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:25.983 Cannot find device "nvmf_br" 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:25.983 Cannot find device "nvmf_init_if" 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:25.983 Cannot find device "nvmf_init_if2" 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:25.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:25.983 
17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:25.983 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:26.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:26.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.169 ms 00:27:26.241 00:27:26.241 --- 10.0.0.3 ping statistics --- 00:27:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.241 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:26.241 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:26.241 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:27:26.241 00:27:26.241 --- 10.0.0.4 ping statistics --- 00:27:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.241 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:26.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:27:26.241 00:27:26.241 --- 10.0.0.1 ping statistics --- 00:27:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.241 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:27:26.241 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:26.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:27:26.241 00:27:26.241 --- 10.0.0.2 ping statistics --- 00:27:26.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.242 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=109216 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 109216 00:27:26.242 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 109216 ']' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.242 17:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:26.242 [2024-11-20 17:48:07.897160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:26.242 [2024-11-20 17:48:07.897449] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.500 [2024-11-20 17:48:08.035504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.500 [2024-11-20 17:48:08.166206] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.500 [2024-11-20 17:48:08.166501] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.500 [2024-11-20 17:48:08.166655] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.500 [2024-11-20 17:48:08.166818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.500 [2024-11-20 17:48:08.166862] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
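For orientation, the nvmf_veth_init block traced above builds the entire test network by hand: initiator-side veth ends nvmf_init_if and nvmf_init_if2 stay in the root namespace with 10.0.0.1/24 and 10.0.0.2/24, target-side ends nvmf_tgt_if and nvmf_tgt_if2 move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, and all four peer ends are enslaved to the nvmf_br bridge, with iptables ACCEPT rules opening port 4420. A condensed sketch of one initiator/target leg, using the trace's own names (run as root; the second leg is symmetric, and the trace additionally tags each iptables rule with an -m comment marker):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # root namespace -> target namespace, as the trace verifies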
00:27:26.500 [2024-11-20 17:48:08.167133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.500 [2024-11-20 17:48:08.167375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.500 [2024-11-20 17:48:08.167271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.435 17:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.435 17:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:27.435 17:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:27.435 17:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.435 17:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:27.435 17:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.435 17:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:27.694 [2024-11-20 17:48:09.262297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.694 17:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:27.952 Malloc0 00:27:27.952 17:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.211 17:48:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.472 17:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:28.738 [2024-11-20 17:48:10.423944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:28.738 17:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:28.998 [2024-11-20 17:48:10.728391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:28.998 17:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:29.255 [2024-11-20 17:48:10.992824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:29.255 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=109331 00:27:29.255 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:29.255 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 109331 /var/tmp/bdevperf.sock 
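With the target process up, host/failover.sh provisions it entirely over JSON-RPC, exactly as traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem, and three listeners that the test will later remove one at a time. A compact restatement (rpc.py path and NQN as in the trace; the loop merely condenses lines @26 through @28):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do    # the three listeners the failover test cycles through
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done

bdevperf (started below with -z -r /var/tmp/bdevperf.sock) then attaches the same subsystem twice, on ports 4420 and 4421, so removing a listener exercises bdev_nvme's failover to the surviving path.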
00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 109331 ']' 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:29.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:29.256 17:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:30.628 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:30.628 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:30.628 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:30.886 NVMe0n1 
00:27:30.886 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:31.144 00:27:31.144 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=109375 00:27:31.144 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:31.144 17:48:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:27:32.079 17:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:27:32.338 [2024-11-20 17:48:14.107013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92ae90 is same with the state(6) to be set [... identical recv-state message for tqpair=0x92ae90 repeated through 17:48:14.107220; duplicate lines omitted ...] 
00:27:32.596 17:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 
00:27:35.881 17:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.881 00:27:35.881 17:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 
00:27:36.139 [2024-11-20 17:48:17.850988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92bc40 is same with the state(6) to be set [... identical recv-state message for tqpair=0x92bc40 repeated through 17:48:17.851247; duplicate lines omitted ...] 
00:27:36.140 17:48:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 
00:27:39.418 17:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:27:39.418 [2024-11-20 17:48:21.171116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 
00:27:39.418 17:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 
00:27:40.792 17:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 
00:27:40.792 [2024-11-20 17:48:22.522249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92cb90 is same with the state(6) to be set [... identical recv-state message for tqpair=0x92cb90 repeated through 17:48:22.522879; duplicate lines omitted ...] 
00:27:40.792 17:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 109375 
00:27:47.379 { 00:27:47.379 "results": [ 00:27:47.379 { 00:27:47.379 "job": "NVMe0n1", 00:27:47.379 "core_mask": "0x1", 00:27:47.379 "workload": "verify", 00:27:47.379 "status": "finished", 00:27:47.379 "verify_range": { 00:27:47.379 "start": 0, 00:27:47.379 "length": 16384 00:27:47.379 }, 00:27:47.379 "queue_depth": 128, 00:27:47.379 "io_size": 4096, 00:27:47.379 "runtime": 15.112921, 00:27:47.379 "iops": 8434.63682500557, 00:27:47.379 "mibps": 32.947800097678005, 00:27:47.379 "io_failed": 3293, 00:27:47.379 "io_timeout": 0, 00:27:47.379 "avg_latency_us": 14759.243148493306, 00:27:47.379 "min_latency_us": 621.8472727272728, 00:27:47.379 "max_latency_us": 137268.13090909092 00:27:47.379 } 00:27:47.379 ], 00:27:47.379 "core_count": 1 00:27:47.379 } 
00:27:47.379 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 109331 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 109331 ']' 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 109331 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109331 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' killing process with pid 109331 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109331'
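A quick consistency check on the summary block above: mibps is just iops times io_size rescaled to MiB, and the numbers line up (bc assumed available):

echo 'scale=6; 8434.63682500557 * 4096 / 1048576' | bc   # iops * io_size(B) / 2^20
# -> 32.947800, matching "mibps": 32.947800097678005

The 3293 entries in io_failed are presumably the requests caught in flight across the three listener removals; they correspond to the ABORTED - SQ DELETION completions replayed from try.txt below.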
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 109331 00:27:47.380 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 109331 00:27:47.380 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:47.380 [2024-11-20 17:48:11.064687] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:47.380 [2024-11-20 17:48:11.064815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109331 ] 00:27:47.380 [2024-11-20 17:48:11.204165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.380 [2024-11-20 17:48:11.325283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.380 Running I/O for 15 seconds... 00:27:47.380 8687.00 IOPS, 33.93 MiB/s [2024-11-20T17:48:29.168Z] [2024-11-20 17:48:14.107802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.107868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.107899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.107916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.107948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.107965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.107979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.107996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.108011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.108043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.108091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 
17:48:14.108109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.108125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.380 [2024-11-20 17:48:14.108158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.380 [2024-11-20 17:48:14.108472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.380 [2024-11-20 17:48:14.108489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.380 [2024-11-20 17:48:14.108505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: the remaining queued READs (lba 83272-83696) and WRITEs (lba 83776-84208) on qid:1 are each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08), exactly as the pair above ...]
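Every completion in the run above carries the same "(00/08)" status. For readers unfamiliar with the notation, the following is a minimal sketch (my own illustration, not SPDK code) of how that pair decodes: in an NVMe completion the 16-bit status/phase word packs P (bit 0), SC (bits 8:1), SCT (bits 11:9), M (bit 14) and DNR (bit 15), and SCT 0x0 (generic command status) with SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base spec, which SPDK's printer renders as "ABORTED - SQ DELETION (00/08)".

    /* Illustrative decode of the "(00/08) p:0 m:0 dnr:0" status printed on
     * every aborted completion above. Field layout follows the NVMe
     * completion status word; names are mine, not SPDK's. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t status = (0x0 << 9) | (0x08 << 1); /* sct=0x0, sc=0x08, p/m/dnr=0 */

        unsigned p   = status & 0x1;         /* phase tag */
        unsigned sc  = (status >> 1) & 0xff; /* status code */
        unsigned sct = (status >> 9) & 0x7;  /* status code type */
        unsigned m   = (status >> 14) & 0x1; /* more */
        unsigned dnr = (status >> 15) & 0x1; /* do not retry */

        printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
        return 0;
    }

Note dnr:0 on every completion: the status is retryable, which is why the bdev layer can requeue this I/O against the surviving path rather than failing it up the stack.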
00:27:47.384 [2024-11-20 17:48:14.112208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59b000 is same with the state(6) to be set
00:27:47.384 [2024-11-20 17:48:14.112227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:47.384 [2024-11-20 17:48:14.112239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:47.384 [2024-11-20 17:48:14.112251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0
00:27:47.384 [2024-11-20 17:48:14.112265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.384 [2024-11-20 17:48:14.112355] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59b000 was disconnected and freed. reset controller.
00:27:47.384 [2024-11-20 17:48:14.112376] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:27:47.384 [2024-11-20 17:48:14.112440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.384 [2024-11-20 17:48:14.112462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three more ASYNC EVENT REQUEST commands (qid:0 cid:1-3) elided, each completed as ABORTED - SQ DELETION (00/08) in the same way ...]
00:27:47.384 [2024-11-20 17:48:14.112568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.384 [2024-11-20 17:48:14.112624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57ace0 (9): Bad file descriptor
00:27:47.384 [2024-11-20 17:48:14.116454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.384 [2024-11-20 17:48:14.148356] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
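The disconnect/failover/reset lines above show the multipath sequence in order: the I/O qpair on the first TCP path is freed, the alternate transport ID (trid) becomes the target, and the controller is reset against it. A rough sketch of that trid rotation follows; the types and names are illustrative stand-ins, not SPDK's bdev_nvme internals.

    /* Rough sketch of the failover step logged by bdev_nvme_failover_trid:
     * when the qpair on the active path is disconnected and freed, the next
     * registered trid becomes the reset target. Names here are mine. */
    #include <stdio.h>

    struct trid { const char *traddr; const char *trsvcid; };

    int main(void)
    {
        /* the two paths this test presumably registered for nqn.2016-06.io.spdk:cnode1 */
        struct trid paths[] = { { "10.0.0.3", "4420" }, { "10.0.0.3", "4421" } };
        size_t n = sizeof(paths) / sizeof(paths[0]);
        size_t active = 0;

        /* qpair on paths[active] was disconnected and freed -> rotate and reset */
        size_t next = (active + 1) % n;
        printf("Start failover from %s:%s to %s:%s\n",
               paths[active].traddr, paths[active].trsvcid,
               paths[next].traddr, paths[next].trsvcid);
        active = next;
        /* a controller reset against paths[active] then reconnects the qpairs,
         * matching the "Resetting controller successful" line above */
        return 0;
    }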
00:27:47.384 8660.00 IOPS, 33.83 MiB/s [2024-11-20T17:48:29.172Z] 8699.67 IOPS, 33.98 MiB/s [2024-11-20T17:48:29.172Z] 8762.75 IOPS, 34.23 MiB/s [2024-11-20T17:48:29.172Z]
00:27:47.384 [2024-11-20 17:48:17.852261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.384 [2024-11-20 17:48:17.852330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: queued WRITEs (lba 91112-91536) and READs (lba 90728-90840) on qid:1 during the next teardown, each completed as ABORTED - SQ DELETION (00/08) ...]
00:27:47.386 [2024-11-20 17:48:17.854696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91544 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:27:47.386 [2024-11-20 17:48:17.854711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.386 [2024-11-20 17:48:17.854743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.386 [2024-11-20 17:48:17.854773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.386 [2024-11-20 17:48:17.854804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.386 [2024-11-20 17:48:17.854843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.386 [2024-11-20 17:48:17.854874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.386 [2024-11-20 17:48:17.854904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.386 [2024-11-20 17:48:17.854944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.386 [2024-11-20 17:48:17.854961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.854977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.854991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 
[2024-11-20 17:48:17.855061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.387 [2024-11-20 17:48:17.855959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.855975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.387 [2024-11-20 17:48:17.855990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.387 [2024-11-20 17:48:17.856006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.387 [2024-11-20 17:48:17.856021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 
[2024-11-20 17:48:17.856441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.388 [2024-11-20 17:48:17.856609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.388 [2024-11-20 17:48:17.856682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.388 [2024-11-20 17:48:17.856694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91744 len:8 PRP1 0x0 PRP2 0x0 00:27:47.388 [2024-11-20 17:48:17.856715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856808] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59cf50 was disconnected and freed. reset controller. 
00:27:47.388 [2024-11-20 17:48:17.856830] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:27:47.388 [2024-11-20 17:48:17.856899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.388 [2024-11-20 17:48:17.856921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.388 [2024-11-20 17:48:17.856952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.856974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.388 [2024-11-20 17:48:17.856988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.857004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:47.388 [2024-11-20 17:48:17.857018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:17.857033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.388 [2024-11-20 17:48:17.857140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57ace0 (9): Bad file descriptor 00:27:47.388 [2024-11-20 17:48:17.861079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.388 [2024-11-20 17:48:17.897930] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
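The failover sequence above is driven entirely by the completion status that spdk_nvme_print_completion reports: every I/O queued on the deleted submission queue completes as ABORTED - SQ DELETION, printed as the pair (00/08), i.e. Status Code Type 0x0 (generic command status) and Status Code 0x08 (command aborted due to SQ deletion). Note dnr:0 in each completion — the do-not-retry bit is clear, which is what allows bdev_nvme to retry the aborted I/O on the alternate path (10.0.0.3:4422) after the reset rather than failing it up to the application. A minimal sketch of decoding that "(SCT/SC)" pair follows; the helper name and the status table are assumptions for illustration, limited to the codes that actually appear in this log, and are not part of the SPDK test suite:

    # Hypothetical helper: decode the "(SCT/SC)" pair that
    # spdk_nvme_print_completion prints, e.g. "(00/08)" above.
    # Table limited to the NVMe generic (SCT 0x0) codes seen in this log.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",  # status of every queued I/O above
    }

    def decode_status(pair: str) -> str:
        """Turn a string like '00/08' into a readable status name."""
        sct_hex, sc_hex = pair.split("/")
        sct, sc = int(sct_hex, 16), int(sc_hex, 16)
        if sct == 0x0:  # generic command status
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"SCT 0x{sct:x}, SC 0x{sc:02x}"

    assert decode_status("00/08") == "ABORTED - SQ DELETION"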
00:27:47.388 8674.60 IOPS, 33.89 MiB/s [2024-11-20T17:48:29.176Z] 8704.17 IOPS, 34.00 MiB/s [2024-11-20T17:48:29.176Z] 8731.14 IOPS, 34.11 MiB/s [2024-11-20T17:48:29.176Z] 8722.38 IOPS, 34.07 MiB/s [2024-11-20T17:48:29.176Z] 8734.78 IOPS, 34.12 MiB/s [2024-11-20T17:48:29.176Z] [2024-11-20 17:48:22.523187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:22.523269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:22.523304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:22.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:22.523367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.388 [2024-11-20 17:48:22.523424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.388 [2024-11-20 17:48:22.523438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.523979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.523993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 
17:48:22.524206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.389 [2024-11-20 17:48:22.524511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-11-20 17:48:22.524525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.390 [2024-11-20 17:48:22.524777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.524973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.524989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.390 [2024-11-20 17:48:22.525392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.390 [2024-11-20 17:48:22.525408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-11-20 17:48:22.525424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.391 [2024-11-20 17:48:22.525445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-11-20 17:48:22.525462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.391 [2024-11-20 17:48:22.525476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-11-20 17:48:22.525497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.391 [2024-11-20 17:48:22.525512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.391 [2024-11-20 
17:48:22.525529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.391 [2024-11-20 17:48:22.525543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~56 similar command/completion pairs elided: the remaining queued WRITEs (lba 31944-32064) and READs (lba 31440-31752) on qid:1, each completed as ABORTED - SQ DELETION (00/08) while the submission queue was torn down ...]
00:27:47.392 [2024-11-20 17:48:22.527508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:47.392 [2024-11-20 17:48:22.527524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:47.392 [2024-11-20 17:48:22.527536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32072 len:8 PRP1 0x0 PRP2 0x0
00:27:47.392 [2024-11-20 17:48:22.527555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.392 [2024-11-20 17:48:22.527619] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59d970 was disconnected and freed. reset controller.
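Every queued command above completes with status (00/08): status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion, which is what the driver reports for I/O still queued when a submission queue is deleted under it during failover. To see how much I/O one teardown swept away, the abort notices can be tallied per opcode from the captured run log; a minimal sketch, assuming the try.txt file the harness redirects bdevperf output into and GNU grep's alternation syntax:

# Tally aborted READs vs WRITEs on qid:1 in the captured bdevperf log
grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c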
00:27:47.392 [2024-11-20 17:48:22.527638] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:27:47.392 [2024-11-20 17:48:22.527697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.392 [2024-11-20 17:48:22.527719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.392 [2024-11-20 17:48:22.527746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.393 [2024-11-20 17:48:22.527760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.393 [2024-11-20 17:48:22.527775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.393 [2024-11-20 17:48:22.527790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.393 [2024-11-20 17:48:22.527805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:47.393 [2024-11-20 17:48:22.527819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:47.393 [2024-11-20 17:48:22.527833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.393 [2024-11-20 17:48:22.527873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57ace0 (9): Bad file descriptor
00:27:47.393 [2024-11-20 17:48:22.531749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.393 [2024-11-20 17:48:22.569755] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
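That is one complete failover cycle: the active path's qpairs are torn down, the controller is marked failed, bdev_nvme disconnects and reconnects on the next transport ID registered for the same controller, and "Resetting controller successful" closes the loop. The alternate paths exist because the harness attaches the same controller name at each target port, visible later in this section at host/failover.sh@76-@80; a sketch of that registration pattern with this run's addresses (listeners assumed already up):

# Register one primary path and two failover paths under the same bdev name
RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # failover path
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # failover path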
00:27:47.393 8670.80 IOPS, 33.87 MiB/s [2024-11-20T17:48:29.181Z] 8639.00 IOPS, 33.75 MiB/s [2024-11-20T17:48:29.181Z] 8614.58 IOPS, 33.65 MiB/s [2024-11-20T17:48:29.181Z] 8589.15 IOPS, 33.55 MiB/s [2024-11-20T17:48:29.181Z] 8559.93 IOPS, 33.44 MiB/s [2024-11-20T17:48:29.181Z] 8489.60 IOPS, 33.16 MiB/s 00:27:47.393 Latency(us) 00:27:47.393 [2024-11-20T17:48:29.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.393 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:47.393 Verification LBA range: start 0x0 length 0x4000 00:27:47.393 NVMe0n1 : 15.11 8434.64 32.95 217.89 0.00 14759.24 621.85 137268.13 00:27:47.393 [2024-11-20T17:48:29.181Z] =================================================================================================================== 00:27:47.393 [2024-11-20T17:48:29.181Z] Total : 8434.64 32.95 217.89 0.00 14759.24 621.85 137268.13 00:27:47.393 Received shutdown signal, test time was about 15.000000 seconds 00:27:47.393 00:27:47.393 Latency(us) 00:27:47.393 [2024-11-20T17:48:29.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.393 [2024-11-20T17:48:29.181Z] =================================================================================================================== 00:27:47.393 [2024-11-20T17:48:29.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=109590 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 109590 /var/tmp/bdevperf.sock 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 109590 ']' 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:47.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
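The trace above starts the second bdevperf instance idle: -z makes it wait for an RPC call before running I/O, -r gives it its own RPC socket (separate from the target's), and -q 128 / -o 4096 / -w verify / -t 1 describe the workload: queue depth 128, 4096-byte I/Os, a verify pattern that reads data back and checks it, for one second (-f is passed through from the harness). The driving pattern is: start idle, attach bdevs over the socket, then fire perform_tests; a sketch with this run's paths:

# Start bdevperf idle, register bdevs against its RPC socket, then kick the run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# ... bdev_nvme_attach_controller calls against /var/tmp/bdevperf.sock go here ...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests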
00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:47.393 17:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:47.651 17:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.651 17:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:47.651 17:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:47.910 [2024-11-20 17:48:29.676116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:48.169 17:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:48.169 [2024-11-20 17:48:29.932439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:48.169 17:48:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.736 NVMe0n1 00:27:48.736 17:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.995 00:27:48.995 17:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.253 00:27:49.253 17:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.253 17:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:49.512 17:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.771 17:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:53.055 17:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:53.055 17:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.055 17:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:53.055 17:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=109727 00:27:53.055 17:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 109727 00:27:54.478 { 00:27:54.478 "results": [ 00:27:54.478 { 00:27:54.478 "job": "NVMe0n1", 00:27:54.478 "core_mask": "0x1", 00:27:54.478 "workload": "verify", 00:27:54.478 "status": "finished", 00:27:54.478 "verify_range": { 00:27:54.478 "start": 0, 00:27:54.478 "length": 16384 00:27:54.478 }, 00:27:54.478 "queue_depth": 128, 00:27:54.478 "io_size": 4096, 
00:27:54.478 "runtime": 1.009779, 00:27:54.478 "iops": 8179.017388953424, 00:27:54.478 "mibps": 31.949286675599314, 00:27:54.478 "io_failed": 0, 00:27:54.478 "io_timeout": 0, 00:27:54.478 "avg_latency_us": 15553.004652555337, 00:27:54.478 "min_latency_us": 2085.2363636363634, 00:27:54.478 "max_latency_us": 15847.796363636364 00:27:54.478 } 00:27:54.478 ], 00:27:54.478 "core_count": 1 00:27:54.478 } 00:27:54.478 17:48:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:54.478 [2024-11-20 17:48:28.370940] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:54.478 [2024-11-20 17:48:28.371635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109590 ] 00:27:54.478 [2024-11-20 17:48:28.521872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.478 [2024-11-20 17:48:28.663492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.478 [2024-11-20 17:48:31.454908] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:27:54.478 [2024-11-20 17:48:31.455038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.478 [2024-11-20 17:48:31.455078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.478 [2024-11-20 17:48:31.455100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.478 [2024-11-20 17:48:31.455115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.478 [2024-11-20 17:48:31.455130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.478 [2024-11-20 17:48:31.455145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.478 [2024-11-20 17:48:31.455160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.478 [2024-11-20 17:48:31.455174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.478 [2024-11-20 17:48:31.455189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.478 [2024-11-20 17:48:31.455244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.478 [2024-11-20 17:48:31.455275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4ce0 (9): Bad file descriptor 00:27:54.478 [2024-11-20 17:48:31.465967] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:54.478 Running I/O for 1 seconds... 
00:27:54.478 8131.00 IOPS, 31.76 MiB/s 00:27:54.478 Latency(us) 00:27:54.478 [2024-11-20T17:48:36.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.478 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:54.478 Verification LBA range: start 0x0 length 0x4000 00:27:54.478 NVMe0n1 : 1.01 8179.02 31.95 0.00 0.00 15553.00 2085.24 15847.80 00:27:54.478 [2024-11-20T17:48:36.266Z] =================================================================================================================== 00:27:54.478 [2024-11-20T17:48:36.266Z] Total : 8179.02 31.95 0.00 0.00 15553.00 2085.24 15847.80 00:27:54.478 17:48:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:54.478 17:48:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.478 17:48:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.737 17:48:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.737 17:48:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:54.995 17:48:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.569 17:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 109590 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 109590 ']' 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 109590 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109590 00:27:58.856 killing process with pid 109590 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109590' 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 109590 00:27:58.856 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 109590 00:27:59.114 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:59.114 17:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.681 rmmod nvme_tcp 00:27:59.681 rmmod nvme_fabrics 00:27:59.681 rmmod nvme_keyring 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 109216 ']' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 109216 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 109216 ']' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 109216 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109216 00:27:59.681 killing process with pid 109216 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109216' 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 109216 00:27:59.681 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 109216 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:59.939 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:28:00.196 00:28:00.196 real 0m34.771s 00:28:00.196 user 2m13.733s 00:28:00.196 sys 0m4.821s 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:00.196 ************************************ 00:28:00.196 END TEST nvmf_failover 00:28:00.196 ************************************ 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.196 17:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.456 ************************************ 00:28:00.456 START TEST nvmf_host_discovery 00:28:00.456 ************************************ 00:28:00.456 17:48:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:00.456 * Looking for test storage... 
00:28:00.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.456 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.457 --rc genhtml_branch_coverage=1 00:28:00.457 --rc genhtml_function_coverage=1 00:28:00.457 --rc genhtml_legend=1 00:28:00.457 --rc geninfo_all_blocks=1 00:28:00.457 --rc geninfo_unexecuted_blocks=1 00:28:00.457 00:28:00.457 ' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.457 --rc genhtml_branch_coverage=1 00:28:00.457 --rc genhtml_function_coverage=1 00:28:00.457 --rc genhtml_legend=1 00:28:00.457 --rc geninfo_all_blocks=1 00:28:00.457 --rc geninfo_unexecuted_blocks=1 00:28:00.457 00:28:00.457 ' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.457 --rc genhtml_branch_coverage=1 00:28:00.457 --rc genhtml_function_coverage=1 00:28:00.457 --rc genhtml_legend=1 00:28:00.457 --rc geninfo_all_blocks=1 00:28:00.457 --rc geninfo_unexecuted_blocks=1 00:28:00.457 00:28:00.457 ' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:00.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.457 --rc genhtml_branch_coverage=1 00:28:00.457 --rc genhtml_function_coverage=1 00:28:00.457 --rc genhtml_legend=1 00:28:00.457 --rc geninfo_all_blocks=1 00:28:00.457 --rc geninfo_unexecuted_blocks=1 00:28:00.457 00:28:00.457 ' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:00.457 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
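The variables above pin down the test topology: initiator veths nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, the target veths nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and the bridge-side peers all join nvmf_br. The trace that follows builds exactly that; condensed to one initiator/target pair (the second pair is symmetric), the wiring is:

# One initiator/target pair of the veth-plus-bridge topology built below
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                     # bridge joins the veth peers
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up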
00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:00.458 Cannot find device "nvmf_init_br" 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:00.458 Cannot find device "nvmf_init_br2" 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:00.458 Cannot find device "nvmf_tgt_br" 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:00.458 Cannot find device "nvmf_tgt_br2" 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:28:00.458 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:00.715 Cannot find device "nvmf_init_br" 00:28:00.715 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:00.716 Cannot find device "nvmf_init_br2" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:00.716 Cannot find device "nvmf_tgt_br" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:00.716 Cannot find device "nvmf_tgt_br2" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:00.716 Cannot find device "nvmf_br" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:00.716 Cannot find device "nvmf_init_if" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:00.716 Cannot find device "nvmf_init_if2" 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:00.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:00.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:00.716 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:00.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:00.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.178 ms 00:28:00.974 00:28:00.974 --- 10.0.0.3 ping statistics --- 00:28:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.974 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:00.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:00.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:28:00.974 00:28:00.974 --- 10.0.0.4 ping statistics --- 00:28:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.974 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:00.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:28:00.974 00:28:00.974 --- 10.0.0.1 ping statistics --- 00:28:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.974 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:00.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:00.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:28:00.974 00:28:00.974 --- 10.0.0.2 ping statistics --- 00:28:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.974 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=110103 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 110103 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 110103 ']' 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.974 17:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.974 [2024-11-20 17:48:42.680811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
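[Editor's sketch] The records above first tear down any leftover interfaces from a previous run (the "Cannot find device" and "Cannot open network namespace" errors are expected on a clean host; each probe is followed by `true` so the script continues) and then build the test topology: a namespace nvmf_tgt_ns_spdk holding the target ends of two veth pairs, host-side addresses 10.0.0.1/24 and 10.0.0.2/24, namespaced addresses 10.0.0.3/24 and 10.0.0.4/24, a bridge nvmf_br joining the four host-side peers, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, using the device names and addresses from the trace (second veth pair omitted for brevity; run as root):

    # Create the namespace and one veth pair per endpoint
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address the host side and the namespaced target side
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic in and verify connectivity both ways
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1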
00:28:00.974 [2024-11-20 17:48:42.680905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.233 [2024-11-20 17:48:42.816155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.233 [2024-11-20 17:48:42.945388] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.233 [2024-11-20 17:48:42.945455] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.233 [2024-11-20 17:48:42.945470] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.233 [2024-11-20 17:48:42.945481] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.233 [2024-11-20 17:48:42.945490] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.233 [2024-11-20 17:48:42.945534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 [2024-11-20 17:48:43.776809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 [2024-11-20 17:48:43.784967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 null0 00:28:02.165 17:48:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 null1 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=110153 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 110153 /tmp/host.sock 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 110153 ']' 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.165 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.165 17:48:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.165 [2024-11-20 17:48:43.894988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
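[Editor's sketch] With connectivity verified, nvmf/common.sh prepends the netns command to NVMF_APP so the target (pid 110103, -m 0x2) runs inside the namespace, and discovery.sh then configures it over /var/tmp/spdk.sock: a TCP transport (as I read the flags, -o disables the C2H success optimization and -u 8192 sets the I/O unit size), a discovery listener on 10.0.0.3:8009, and two null bdevs of 1000 MiB with 512-byte blocks. A second nvmf_tgt instance (pid 110153, -m 0x1 -r /tmp/host.sock) is launched to play the host role. A sketch of the same target bring-up with the stock scripts/rpc.py instead of the harness's rpc_cmd wrapper (binary and script paths are assumptions):

    # Start the target inside the namespace; -m 0x2 pins it to core 1
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Stand-in for the harness's waitforlisten: poll until the RPC socket answers
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # Same RPC sequence as the trace, against the default /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MiB, 512 B blocks
    ./scripts/rpc.py bdev_null_create null1 1000 512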
00:28:02.165 [2024-11-20 17:48:43.895170] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110153 ] 00:28:02.424 [2024-11-20 17:48:44.046155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.424 [2024-11-20 17:48:44.142145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 [2024-11-20 17:48:45.281490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:03.615 17:48:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:03.615 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.616 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:28:03.874 17:48:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:28:04.473 [2024-11-20 17:48:45.940513] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:04.473 [2024-11-20 17:48:45.940562] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:04.473 [2024-11-20 17:48:45.940586] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:04.473 
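[Editor's sketch] On the host side, bdev_nvme_start_discovery -b nvme attaches a discovery controller to 10.0.0.3:8009, after which the script repeatedly compares controller and bdev listings against expected values until the discovery log page produces subsystem nvme0 and bdev nvme0n1. The helpers visible in the trace are thin RPC-plus-jq pipelines, and waitforcondition is a bounded eval loop; reconstructed here from the xtrace above, so an approximation rather than the canonical source:

    # Helpers as they appear in the xtrace: list controllers / bdevs as one sorted line
    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll a condition up to 10 times, one second apart, as waitforcondition does
    waitforcondition() {
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'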
[2024-11-20 17:48:46.027647] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:28:04.473 [2024-11-20 17:48:46.091978] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:04.473 [2024-11-20 17:48:46.092033] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:04.731 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:04.731 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:04.990 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:05.249 17:48:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.249 [2024-11-20 17:48:46.854731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:05.249 [2024-11-20 17:48:46.855166] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:05.249 [2024-11-20 17:48:46.855204] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:05.249 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:05.250 [2024-11-20 17:48:46.943209] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:05.250 17:48:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:05.250 17:48:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.250 [2024-11-20 17:48:47.003701] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:05.250 [2024-11-20 17:48:47.003731] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:05.250 [2024-11-20 17:48:47.003739] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:05.250 17:48:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:05.250 17:48:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:06.624 17:48:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.624 [2024-11-20 17:48:48.144443] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:06.624 [2024-11-20 17:48:48.144483] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:06.624 [2024-11-20 17:48:48.149461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.624 
[2024-11-20 17:48:48.149497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.624 [2024-11-20 17:48:48.149511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.624 [2024-11-20 17:48:48.149521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:06.624 [2024-11-20 17:48:48.149532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.624 [2024-11-20 17:48:48.149542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.624 [2024-11-20 17:48:48.149553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.624 [2024-11-20 17:48:48.149562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.624 [2024-11-20 17:48:48.149572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:06.624 [2024-11-20 17:48:48.159410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.624 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.624 [2024-11-20 17:48:48.169432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.624 [2024-11-20 17:48:48.169598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.169623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.169635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.169655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.169671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.169680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.169693] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.169710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.625 [2024-11-20 17:48:48.179506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.179593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.179614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.179626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.179650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.179665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.179673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.179682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.179697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.625 [2024-11-20 17:48:48.189560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.189649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.189671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.189682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.189698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.189713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.189721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.189731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.189746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
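[Editor's note] The passage above is the expected fallout of nvmf_subsystem_remove_listener on port 4420: the in-flight ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION when the admin queue is torn down, and every reconnect attempt against the now-closed 10.0.0.3:4420 fails in connect() with errno 111 (ECONNREFUSED), so bdev_nvme logs "Resetting controller failed" and schedules another retry. One way to watch the surviving path from outside, reusing the get_subsystem_paths pipeline already shown in the trace (the "4421" expectation is an assumption about where the test converges):

    # List the remaining transport service IDs for controller nvme0 on the host app
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Assumed output once the 4420 path is dropped: 4421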
00:28:06.625 [2024-11-20 17:48:48.199615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.199696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.199716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.199727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.199743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.199757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.199767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.199776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.199791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.625 [2024-11-20 17:48:48.209666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.209753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.209774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.209785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.209802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.209817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.209826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:06.625 [2024-11-20 17:48:48.209835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.209851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:06.625 [2024-11-20 17:48:48.219993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.220093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.220115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.220127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.220144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.220158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.220167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.220185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.220201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.625 [2024-11-20 17:48:48.230044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.625 [2024-11-20 17:48:48.230136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.625 [2024-11-20 17:48:48.230164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87ffb0 with addr=10.0.0.3, port=4420 00:28:06.625 [2024-11-20 17:48:48.230176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ffb0 is same with the state(6) to be set 00:28:06.625 [2024-11-20 17:48:48.230192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ffb0 (9): Bad file descriptor 00:28:06.625 [2024-11-20 17:48:48.230207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.625 [2024-11-20 17:48:48.230215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.625 [2024-11-20 17:48:48.230224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.625 [2024-11-20 17:48:48.230239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.625 [2024-11-20 17:48:48.231022] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:28:06.625 [2024-11-20 17:48:48.231056] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:06.625 17:48:48 
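The xtrace above repeatedly exercises the waitforcondition polling helper from common/autotest_common.sh (lines 914-918 in the trace: local cond, local max=10, (( max-- )), eval, return 0). A minimal sketch of that pattern, reconstructed from the trace alone — the pause between polls is an assumption, since no sleep is echoed here because every condition happens to pass on its first evaluation:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # re-evaluate the condition string each pass, e.g.
        # '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        eval "$cond" && return 0
        sleep 0.5   # assumed back-off; not visible in this xtrace
    done
    return 1        # condition never held within max attempts
}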
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:06.625 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
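The notification check traced above (host/discovery.sh@74-75) fetches the notification list over the host RPC socket and counts entries newer than the last consumed id. A plausible reconstruction, assuming only what the trace shows — the cursor advance is inferred from notify_id moving 2 -> 4 while notification_count is 2:

get_notification_count() {
    # -i passes the last consumed notification id; jq counts what is newer
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
        -i "$notify_id" | jq '. | length')
    # advance the cursor so the next call only sees fresh events (assumed)
    notify_id=$((notify_id + notification_count))
}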
max-- )) 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:06.626 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:06.884 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.885 17:48:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.819 [2024-11-20 17:48:49.590969] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:07.819 [2024-11-20 17:48:49.591347] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:07.819 [2024-11-20 17:48:49.591424] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:08.078 [2024-11-20 17:48:49.680271] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:28:08.078 [2024-11-20 17:48:49.747111] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:08.078 [2024-11-20 17:48:49.747434] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.078 2024/11/20 17:48:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:28:08.078 request: 00:28:08.078 { 00:28:08.078 "method": "bdev_nvme_start_discovery", 00:28:08.078 "params": { 00:28:08.078 "name": "nvme", 00:28:08.078 "trtype": "tcp", 00:28:08.078 "traddr": "10.0.0.3", 00:28:08.078 "adrfam": "ipv4", 00:28:08.078 "trsvcid": "8009", 00:28:08.078 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:08.078 "wait_for_attach": true 00:28:08.078 } 00:28:08.078 } 00:28:08.078 Got JSON-RPC error response 00:28:08.078 GoRPCClient: error on JSON-RPC call 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:08.078 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
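The expected-failure wrapper traced here (common/autotest_common.sh@650-677) is the NOT helper: it runs the wrapped command and inverts its exit status, while still treating signal deaths as real failures. A simplified sketch matching the traced control flow — the valid_exec_arg check and the allowed-exit-code list (the [[ -n '' ]] step in the trace) are elided:

NOT() {
    local es=0
    "$@" || es=$?
    # exit codes above 128 mean the command died on a signal: propagate those
    (( es > 128 )) && return "$es"
    # succeed only when the wrapped command failed, as with the duplicate
    # bdev_nvme_start_discovery call rejected here with Code=-17 (File exists)
    (( !es == 0 ))
}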
host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.079 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 2024/11/20 17:48:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:28:08.337 request: 00:28:08.337 { 00:28:08.337 "method": "bdev_nvme_start_discovery", 00:28:08.337 "params": { 00:28:08.337 "name": "nvme_second", 00:28:08.337 "trtype": "tcp", 00:28:08.337 "traddr": "10.0.0.3", 00:28:08.337 "adrfam": "ipv4", 00:28:08.337 "trsvcid": "8009", 00:28:08.337 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:08.337 "wait_for_attach": true 00:28:08.337 } 00:28:08.337 } 00:28:08.337 Got JSON-RPC error response 00:28:08.337 GoRPCClient: error on JSON-RPC call 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:08.337 17:48:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:08.337 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.338 17:48:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.338 17:48:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.338 17:48:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.272 [2024-11-20 17:48:51.019959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.272 [2024-11-20 17:48:51.020292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b1f0 with addr=10.0.0.3, port=8010 00:28:09.272 [2024-11-20 17:48:51.020331] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:09.272 [2024-11-20 17:48:51.020343] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:09.272 [2024-11-20 17:48:51.020354] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:28:10.648 [2024-11-20 17:48:52.019988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.648 [2024-11-20 17:48:52.020350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b1f0 with addr=10.0.0.3, port=8010 00:28:10.648 [2024-11-20 17:48:52.020534] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:10.648 [2024-11-20 17:48:52.020826] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:10.648 [2024-11-20 17:48:52.020849] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:28:11.582 [2024-11-20 17:48:53.019772] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:28:11.582 2024/11/20 17:48:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:28:11.582 request: 00:28:11.582 { 00:28:11.582 "method": "bdev_nvme_start_discovery", 00:28:11.582 "params": { 00:28:11.582 "name": "nvme_second", 00:28:11.582 "trtype": "tcp", 00:28:11.582 "traddr": "10.0.0.3", 00:28:11.582 "adrfam": "ipv4", 00:28:11.582 "trsvcid": "8010", 00:28:11.582 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:11.583 "wait_for_attach": false, 00:28:11.583 "attach_timeout_ms": 3000 00:28:11.583 } 00:28:11.583 } 00:28:11.583 Got JSON-RPC error response 00:28:11.583 GoRPCClient: error on JSON-RPC call 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
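The two connect() failures to port 8010 at 17:48:51 and 17:48:52 and the timeout at 17:48:53 correspond to the -T 3000 attach deadline: nothing listens on 8010, so after 3000 ms the RPC fails with Code=-110. In these tests rpc_cmd forwards its arguments to scripts/rpc.py, so the equivalent direct invocation — flags copied verbatim from the traced call — would look like:

# blocks for at most 3000 ms waiting for the discovery ctrlr to attach
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000

# contrast with -w (wait_for_attach) in the earlier port-8009 calls,
# which blocks until the discovery controller actually attaches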
# sort 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 110153 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.583 rmmod nvme_tcp 00:28:11.583 rmmod nvme_fabrics 00:28:11.583 rmmod nvme_keyring 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 110103 ']' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 110103 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 110103 ']' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 110103 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110103 00:28:11.583 killing process with pid 110103 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110103' 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 110103 00:28:11.583 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 110103 00:28:11.840 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 
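Before unloading the kernel modules, nvmftestfini kills the nvmf target process (pid 110103) through the killprocess helper traced at common/autotest_common.sh@950-974. A sketch reduced to the Linux path the trace takes — the behavior when the pid is already gone is an assumption, and the sudo escalation branch ('[' reactor_1 = sudo ']' in the trace) is kept only as a comment:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # assumed: nothing to do if gone
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper takes a different path when process_name is sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap it so the ports and shm segment are really freed
}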
00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:11.841 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:28:12.098 00:28:12.098 real 0m11.839s 00:28:12.098 user 0m22.553s 00:28:12.098 sys 0m1.873s 00:28:12.098 ************************************ 00:28:12.098 END TEST nvmf_host_discovery 00:28:12.098 ************************************ 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.098 ************************************ 00:28:12.098 START TEST nvmf_host_multipath_status 00:28:12.098 ************************************ 00:28:12.098 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:12.357 * Looking for test storage... 00:28:12.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:12.357 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:12.357 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:12.357 17:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:12.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.357 --rc genhtml_branch_coverage=1 00:28:12.357 --rc genhtml_function_coverage=1 00:28:12.357 --rc genhtml_legend=1 00:28:12.357 --rc geninfo_all_blocks=1 00:28:12.357 --rc geninfo_unexecuted_blocks=1 00:28:12.357 00:28:12.357 ' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:12.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.357 --rc genhtml_branch_coverage=1 00:28:12.357 --rc genhtml_function_coverage=1 00:28:12.357 --rc genhtml_legend=1 00:28:12.357 --rc geninfo_all_blocks=1 00:28:12.357 --rc geninfo_unexecuted_blocks=1 00:28:12.357 00:28:12.357 ' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:12.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.357 --rc genhtml_branch_coverage=1 00:28:12.357 --rc genhtml_function_coverage=1 00:28:12.357 --rc genhtml_legend=1 00:28:12.357 --rc geninfo_all_blocks=1 00:28:12.357 --rc geninfo_unexecuted_blocks=1 00:28:12.357 00:28:12.357 ' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:12.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.357 --rc genhtml_branch_coverage=1 00:28:12.357 --rc genhtml_function_coverage=1 00:28:12.357 --rc genhtml_legend=1 00:28:12.357 --rc geninfo_all_blocks=1 00:28:12.357 --rc geninfo_unexecuted_blocks=1 00:28:12.357 00:28:12.357 ' 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:12.357 17:48:54 
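The lt 1.15 2 call traced above walks cmp_versions from scripts/common.sh: both version strings are split on ., - and :, then compared component by component. A compact reconstruction of the numeric path the trace exercises — this is a sketch, not the in-tree implementation, and any suffix handling beyond the split is not visible here and is left out:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # missing components compare as 0; first difference decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *=* ]]   # all components equal: only <=, >=, == succeed
}

# lt 1.15 2: first components already differ (1 < 2), so the installed
# lcov is treated as pre-2.x and the 1.x option spelling is selected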
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.357 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:12.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:12.358 Cannot find device "nvmf_init_br" 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:12.358 Cannot find device "nvmf_init_br2" 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:28:12.358 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:12.617 Cannot find device "nvmf_tgt_br" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.617 Cannot find device "nvmf_tgt_br2" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:12.617 Cannot find device "nvmf_init_br" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:12.617 Cannot find device "nvmf_init_br2" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:12.617 Cannot find device "nvmf_tgt_br" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:12.617 Cannot find device "nvmf_tgt_br2" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:12.617 Cannot find device "nvmf_br" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:28:12.617 Cannot find device "nvmf_init_if" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:12.617 Cannot find device "nvmf_init_if2" 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:12.617 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:12.876 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:12.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:12.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:28:12.877 00:28:12.877 --- 10.0.0.3 ping statistics --- 00:28:12.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.877 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:12.877 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:12.877 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:28:12.877 00:28:12.877 --- 10.0.0.4 ping statistics --- 00:28:12.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.877 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:12.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:28:12.877 00:28:12.877 --- 10.0.0.1 ping statistics --- 00:28:12.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.877 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:12.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:28:12.877 00:28:12.877 --- 10.0.0.2 ping statistics --- 00:28:12.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.877 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=110690 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 110690 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 110690 ']' 00:28:12.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
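What ran above is nvmf_veth_init from test/nvmf/common.sh: it tears down any leftover topology first (the "Cannot find device" / "Cannot open network namespace" messages are expected, each failing command is followed by true), then builds a bridged two-path network and verifies it with four pings before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, assuming the helper does essentially what the xtrace shows, with the names and addresses taken from the log:

    # Target runs in its own netns; each veth pair has its bridge-side peer
    # enslaved to nvmf_br, so initiators 10.0.0.1/.2 reach targets 10.0.0.3/.4.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator path 1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target path 1
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (*_if2/*_br2 are created the same way; all links and lo are brought up)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # host -> target namespace; the reverse runs via ip netns exec

The SPDK_NVMF comment tagged onto each iptables rule is what lets the ipts wrapper find and remove exactly its own rules on cleanup.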
00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.877 17:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 [2024-11-20 17:48:54.649643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:12.877 [2024-11-20 17:48:54.649772] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.136 [2024-11-20 17:48:54.790019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:13.136 [2024-11-20 17:48:54.881029] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.136 [2024-11-20 17:48:54.881340] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.136 [2024-11-20 17:48:54.881521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.136 [2024-11-20 17:48:54.881648] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.136 [2024-11-20 17:48:54.881702] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.136 [2024-11-20 17:48:54.885122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.136 [2024-11-20 17:48:54.885158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=110690 00:28:13.395 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:13.653 [2024-11-20 17:48:55.363435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.653 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:13.912 Malloc0 00:28:14.170 17:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:14.429 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:14.688 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:14.946 [2024-11-20 17:48:56.637662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:14.946 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:15.205 [2024-11-20 17:48:56.913991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:15.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=110786 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 110786 /var/tmp/bdevperf.sock 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 110786 ']' 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
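Decoded from the rpc.py calls above: the target (nvmf_tgt, pid 110690, inside the namespace) exports one malloc-backed namespace through two TCP listeners on the same subsystem, which is what gives the host two ANA-manageable paths to a single namespace. The equivalent standalone sequence, with every value copied from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2           # -r enables ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf (pid 110786) then attaches one controller per listener under the same bdev name Nvme0; the -x multipath flag on the second bdev_nvme_attach_controller call below makes the 4421 connection join the existing bdev as an additional path instead of failing on the duplicate name.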
00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.205 17:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:16.583 17:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.583 17:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:28:16.583 17:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:16.841 17:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:17.099 Nvme0n1 00:28:17.099 17:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:17.665 Nvme0n1 00:28:17.665 17:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:17.665 17:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:19.596 17:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:19.596 17:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:19.853 17:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:20.111 17:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:21.489 17:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:21.489 17:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:21.489 17:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.489 17:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.489 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.489 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:21.489 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.489 17:49:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.808 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:21.808 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.808 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.808 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:22.066 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.066 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:22.066 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.066 17:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:22.324 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.325 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:22.325 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.325 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.583 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.583 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:22.583 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.583 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:23.150 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.150 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:23.150 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:23.150 17:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:23.409 17:49:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.791 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:25.052 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.052 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:25.052 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.052 17:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:25.310 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.310 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:25.310 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.310 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:25.617 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.617 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:25.617 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.617 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:25.875 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.875 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:26.133 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.133 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:26.391 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.391 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:26.391 17:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:26.649 17:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:28:26.907 17:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:27.841 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:27.841 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:27.841 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.841 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:28.099 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.099 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:28.099 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.099 17:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.665 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:29.231 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.231 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:29.231 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:29.231 17:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.489 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.489 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:29.489 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.489 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:29.747 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.747 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:29.747 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:30.007 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:30.265 17:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:31.200 17:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:31.201 17:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:31.201 17:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.201 17:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:31.770 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.770 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:31.770 17:49:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:31.770 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.029 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:32.029 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:32.029 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:32.029 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.287 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.287 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:32.287 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.287 17:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:32.553 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.553 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:32.553 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.553 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:32.828 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.828 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:32.828 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.828 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:33.088 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:33.088 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:33.088 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:33.346 17:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:33.605 17:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:34.541 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:34.541 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:34.541 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:34.541 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.801 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:34.801 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:34.801 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.801 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:35.367 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:35.367 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:35.367 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.367 17:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:35.626 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.626 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:35.626 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.626 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:35.884 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.884 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:35.884 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.884 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:28:36.143 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.143 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:36.143 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.143 17:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:36.401 17:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.401 17:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:36.401 17:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:36.660 17:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:36.919 17:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:38.295 17:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.553 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.553 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:38.553 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.553 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:28:38.966 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.966 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:38.966 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:38.966 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.256 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.256 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:39.256 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.256 17:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:39.515 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.515 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:39.515 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.515 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:39.773 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.773 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:40.030 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:40.030 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:40.288 17:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:40.546 17:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:41.481 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:41.481 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:41.481 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
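This is the pivot of the test: bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active replaces the default single-active path selection, so with both listeners optimized the expectation changes from the earlier check_status true false true true true true (exactly one current path) to check_status true true true true true true (every optimized path current at once). A minimal sketch of the switch, assuming the default policy was in effect up to this point:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    # optimized/optimized before the switch: current = true/false (one active path)
    # optimized/optimized after the switch:  current = true/true  (both paths carry I/O)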
00:28:41.481 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:42.047 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.047 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:42.047 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.047 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:42.305 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.305 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:42.305 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.305 17:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:42.563 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.563 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:42.563 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:42.563 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.821 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.821 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:42.821 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.821 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:43.078 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.078 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:43.078 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.078 17:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:43.336 17:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.336 
17:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:43.336 17:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:43.594 17:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:44.160 17:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:45.094 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:45.094 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:45.094 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.094 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:45.352 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:45.352 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:45.352 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.352 17:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:45.610 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.610 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:45.610 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.610 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:45.868 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.868 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:45.868 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.868 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:46.126 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.126 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:46.126 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.126 17:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:46.384 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.384 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:46.384 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.384 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:46.642 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.642 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:46.642 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:46.900 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:28:47.158 17:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:48.639 17:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:48.639 17:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:48.639 17:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.639 17:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:48.639 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.639 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:48.639 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.639 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:48.897 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.897 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:28:48.897 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.897 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:49.156 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.156 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:49.156 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.156 17:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:49.414 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.414 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:49.414 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.414 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.980 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.980 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:49.980 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.980 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:50.239 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.239 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:50.239 17:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:50.497 17:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:50.756 17:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:51.690 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:51.690 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:51.690 17:49:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.690 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:51.949 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.949 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:52.208 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.208 17:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:52.467 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:52.467 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:52.467 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.467 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:52.725 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.725 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:52.725 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.725 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:52.984 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.984 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:52.984 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.984 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:53.244 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.244 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:53.244 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.244 17:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 110786
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 110786 ']'
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 110786
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110786
00:28:53.502 killing process with pid 110786
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110786'
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 110786
00:28:53.502 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 110786
00:28:53.503 {
00:28:53.503   "results": [
00:28:53.503     {
00:28:53.503       "job": "Nvme0n1",
00:28:53.503       "core_mask": "0x4",
00:28:53.503       "workload": "verify",
00:28:53.503       "status": "terminated",
00:28:53.503       "verify_range": {
00:28:53.503         "start": 0,
00:28:53.503         "length": 16384
00:28:53.503       },
00:28:53.503       "queue_depth": 128,
00:28:53.503       "io_size": 4096,
00:28:53.503       "runtime": 35.887727,
00:28:53.503       "iops": 7735.820103624841,
00:28:53.503       "mibps": 30.218047279784535,
00:28:53.503       "io_failed": 0,
00:28:53.503       "io_timeout": 0,
00:28:53.503       "avg_latency_us": 16516.377304074784,
00:28:53.503       "min_latency_us": 415.18545454545455,
00:28:53.503       "max_latency_us": 4026531.84
00:28:53.503     }
00:28:53.503   ],
00:28:53.503   "core_count": 1
00:28:53.503 }
00:28:54.073 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 110786
00:28:54.073 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:54.073 [2024-11-20 17:48:57.021126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:28:54.073 [2024-11-20 17:48:57.021366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110786 ]
00:28:54.073 [2024-11-20 17:48:57.171707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:54.073 [2024-11-20 17:48:57.310147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:28:54.073 [2024-11-20 17:48:59.169582] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
00:28:54.074 Running I/O for 90 seconds...
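The shell trace above (multipath_status.sh@59-@73) boils down to two small helpers: port_status re-reads bdev_nvme_get_io_paths over the bdevperf RPC socket and compares one field (current, connected or accessible) of the io_path at a given trsvcid against an expected value, while set_ANA_state retags the two listeners of cnode1. Below is a condensed sketch of that pattern, not the authoritative multipath_status.sh: the RPC method names, socket path, NQN, address, ports and jq filter are taken verbatim from the trace, while the rpc shorthand and the argument handling are assumptions of this sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout the trace

# port_status <trsvcid> <field> <expected>: fetch the io_paths view from the
# bdevperf app and compare one field of the path listening on <trsvcid>.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: retag both listeners of cnode1.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# The @133..@135 step above, reduced to its two essential assertions:
set_ANA_state non_optimized inaccessible
sleep 1
port_status 4420 current true && port_status 4421 accessible false

Each check_status call in the trace (@125, @131, @135) is just six such assertions in a row, one per field and port, and the sleep 1 after every set_ANA_state gives the host side a second to pick up the ANA state change before the expectations are re-checked.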
00:28:54.074 8129.00 IOPS, 31.75 MiB/s [2024-11-20T17:49:35.862Z] 8065.00 IOPS, 31.50 MiB/s [2024-11-20T17:49:35.862Z] 8145.00 IOPS, 31.82 MiB/s [2024-11-20T17:49:35.862Z] 8207.25 IOPS, 32.06 MiB/s [2024-11-20T17:49:35.862Z] 8235.20 IOPS, 32.17 MiB/s [2024-11-20T17:49:35.862Z] 8280.00 IOPS, 32.34 MiB/s [2024-11-20T17:49:35.862Z] 8320.00 IOPS, 32.50 MiB/s [2024-11-20T17:49:35.862Z] 8334.00 IOPS, 32.55 MiB/s [2024-11-20T17:49:35.862Z] 8357.78 IOPS, 32.65 MiB/s [2024-11-20T17:49:35.862Z] 8376.20 IOPS, 32.72 MiB/s [2024-11-20T17:49:35.862Z] 8383.64 IOPS, 32.75 MiB/s [2024-11-20T17:49:35.862Z] 8380.17 IOPS, 32.74 MiB/s [2024-11-20T17:49:35.862Z] 8374.77 IOPS, 32.71 MiB/s [2024-11-20T17:49:35.862Z] 8371.43 IOPS, 32.70 MiB/s [2024-11-20T17:49:35.862Z] 8372.13 IOPS, 32.70 MiB/s [2024-11-20T17:49:35.862Z] [2024-11-20 17:49:14.978212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-11-20 17:49:14.978320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.978963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.978981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.979971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.979986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.980009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.980024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.980046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.980062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:54.074 [2024-11-20 17:49:14.980084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-11-20 17:49:14.980099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.980960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.980977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-11-20 17:49:14.981464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:54.075 [2024-11-20 17:49:14.981485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.981974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.981995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.076 [2024-11-20 17:49:14.982026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.982956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.982973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:54.076 [2024-11-20 17:49:14.983000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.076 [2024-11-20 17:49:14.983028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:54.077 [2024-11-20 17:49:14.983592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.077 [2024-11-20 17:49:14.983608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 
sqhd:0028 p:0 m:0 dnr:0
00:28:54.077 [2024-11-20 17:49:14.983642 .. 17:49:14.984659] nvme_qpair.c NOTICE pairs, condensed: 22 WRITE commands (sqid:1 nsid:1 lba:127736..127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0029..003e p:0 m:0 dnr:0
00:28:54.078 8159.31 IOPS, 31.87 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7679.35 IOPS, 30.00 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7252.72 IOPS, 28.33 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 6871.00 IOPS, 26.84 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 6705.00 IOPS, 26.19 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 6820.57 IOPS, 26.64 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 6919.32 IOPS, 27.03 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7004.04 IOPS, 27.36 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7106.54 IOPS, 27.76 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7199.12 IOPS, 28.12 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7288.12 IOPS, 28.47 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7352.63 IOPS, 28.72 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7407.39 IOPS, 28.94 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7457.10 IOPS, 29.13 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7513.30 IOPS, 29.35 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7570.03 IOPS, 29.57 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7627.91 IOPS, 29.80 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 7677.36 IOPS, 29.99 MiB/s [2024-11-20T17:49:35.866Z]
00:28:54.078 [2024-11-20 17:49:32.397102 .. 17:49:32.399367] nvme_qpair.c NOTICE pairs, condensed: 28 mixed commands (WRITE sqid:1 nsid:1 lba:61880..62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ sqid:1 nsid:1 lba:61352..61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0040..0062 p:0 m:0 dnr:0
00:28:54.079 7700.56 IOPS, 30.08 MiB/s [2024-11-20T17:49:35.867Z]
00:28:54.079 7719.80 IOPS, 30.16 MiB/s [2024-11-20T17:49:35.867Z]
00:28:54.079 Received shutdown signal, test time was about 35.888468 seconds
00:28:54.079
00:28:54.079                                                             Latency(us)
00:28:54.079 [2024-11-20T17:49:35.867Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:54.079 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:54.079 Verification LBA range: start 0x0 length 0x4000
00:28:54.079 Nvme0n1                     :      35.89    7735.82      30.22       0.00       0.00   16516.38     415.19 4026531.84
00:28:54.079 [2024-11-20T17:49:35.867Z] ===================================================================================================================
00:28:54.079 [2024-11-20T17:49:35.867Z] Total                       :              7735.82      30.22       0.00       0.00   16516.38     415.19 4026531.84
00:28:54.079 [2024-11-20 17:49:35.278160] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.338 rmmod nvme_tcp 00:28:54.338 rmmod nvme_fabrics 00:28:54.338 rmmod nvme_keyring 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 110690 ']' 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 110690 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 110690 ']' 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 110690 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.338 17:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110690 00:28:54.338 killing process with pid 110690 00:28:54.338 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:54.338 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:54.338 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110690' 00:28:54.338 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 110690 00:28:54.338 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 110690 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:54.596 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:28:54.854 00:28:54.854 real 0m42.630s 00:28:54.854 user 2m19.806s 00:28:54.854 sys 0m10.426s 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:54.854 ************************************ 00:28:54.854 END TEST nvmf_host_multipath_status 00:28:54.854 ************************************ 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.854 ************************************ 00:28:54.854 START TEST nvmf_discovery_remove_ifc 00:28:54.854 ************************************ 00:28:54.854 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:55.114 * Looking for test storage... 
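For readers following the trace: the nvmftestfini/nvmf_veth_fini teardown recorded above reduces to the command sequence below. This is a condensed sketch assembled from the xtrace records, not the common.sh source itself; the final netns removal is an assumption, since _remove_spdk_ns runs with its output redirected and its body is not shown.

  # unload the initiator-side kernel modules (rmmod output shown in the trace)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the target app, then drop only the SPDK_NVMF-tagged iptables rules
  kill 110690   # killprocess; the harness then waits for pid 110690 to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach and delete the veth/bridge topology, host side first
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster
      ip link set "$br" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns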
00:28:55.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.114 --rc genhtml_branch_coverage=1 00:28:55.114 --rc genhtml_function_coverage=1 00:28:55.114 --rc genhtml_legend=1 00:28:55.114 --rc geninfo_all_blocks=1 00:28:55.114 --rc geninfo_unexecuted_blocks=1 00:28:55.114 00:28:55.114 ' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.114 --rc genhtml_branch_coverage=1 00:28:55.114 --rc genhtml_function_coverage=1 00:28:55.114 --rc genhtml_legend=1 00:28:55.114 --rc geninfo_all_blocks=1 00:28:55.114 --rc geninfo_unexecuted_blocks=1 00:28:55.114 00:28:55.114 ' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.114 --rc genhtml_branch_coverage=1 00:28:55.114 --rc genhtml_function_coverage=1 00:28:55.114 --rc genhtml_legend=1 00:28:55.114 --rc geninfo_all_blocks=1 00:28:55.114 --rc geninfo_unexecuted_blocks=1 00:28:55.114 00:28:55.114 ' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.114 --rc genhtml_branch_coverage=1 00:28:55.114 --rc genhtml_function_coverage=1 00:28:55.114 --rc genhtml_legend=1 00:28:55.114 --rc geninfo_all_blocks=1 00:28:55.114 --rc geninfo_unexecuted_blocks=1 00:28:55.114 00:28:55.114 ' 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.114 17:49:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.114 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:55.115 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:55.115 17:49:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:55.115 Cannot find device "nvmf_init_br" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:55.115 Cannot find device "nvmf_init_br2" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:55.115 Cannot find device "nvmf_tgt_br" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:55.115 Cannot find device "nvmf_tgt_br2" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:55.115 Cannot find device "nvmf_init_br" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:55.115 Cannot find device "nvmf_init_br2" 00:28:55.115 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:28:55.116 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:55.116 Cannot find device "nvmf_tgt_br" 00:28:55.116 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:28:55.116 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:55.116 Cannot find device "nvmf_tgt_br2" 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:55.374 Cannot find device "nvmf_br" 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:55.374 Cannot find device "nvmf_init_if" 00:28:55.374 17:49:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:55.374 Cannot find device "nvmf_init_if2" 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:55.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:55.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:55.374 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:55.375 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:55.375 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:55.375 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:55.375 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:55.375 17:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:55.375 17:49:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:55.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:55.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms
00:28:55.375
00:28:55.375 --- 10.0.0.3 ping statistics ---
00:28:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:55.375 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:55.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:55.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms
00:28:55.375
00:28:55.375 --- 10.0.0.4 ping statistics ---
00:28:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:55.375 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
00:28:55.375 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:55.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:55.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:28:55.633
00:28:55.633 --- 10.0.0.1 ping statistics ---
00:28:55.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:55.633 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:55.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:55.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms
00:28:55.633
00:28:55.633 --- 10.0.0.2 ping statistics ---
00:28:55.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:55.633 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=112156
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 112156
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 112156 ']'
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:55.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
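The environment those pings just verified was assembled by nvmf_veth_init a few records earlier. Condensed from the trace into plain commands (the intermediate "ip link set ... up" calls and the iptables comment arguments are elided here), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per interface; the *_br ends stay on the host for bridging
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiators get 10.0.0.1-2 on the host, targets 10.0.0.3-4 in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the four host-side peers together and admit NVMe/TCP traffic
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT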
00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.633 17:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:55.633 [2024-11-20 17:49:37.269950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:55.633 [2024-11-20 17:49:37.270689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.633 [2024-11-20 17:49:37.413857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.891 [2024-11-20 17:49:37.513589] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.891 [2024-11-20 17:49:37.513662] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.891 [2024-11-20 17:49:37.513674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.891 [2024-11-20 17:49:37.513682] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.891 [2024-11-20 17:49:37.513690] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.891 [2024-11-20 17:49:37.513727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:56.825 [2024-11-20 17:49:38.385537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.825 [2024-11-20 17:49:38.393658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:56.825 null0 00:28:56.825 [2024-11-20 17:49:38.425740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=112206 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 112206 /tmp/host.sock 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 112206 ']' 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.825 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.825 17:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:56.825 [2024-11-20 17:49:38.521831] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:56.825 [2024-11-20 17:49:38.521952] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112206 ] 00:28:57.084 [2024-11-20 17:49:38.662197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.084 [2024-11-20 17:49:38.765232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
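The host side of this test is a second SPDK app listening on /tmp/host.sock; the rpc_cmd calls in the trace are a thin wrapper around scripts/rpc.py. Reduced to plain commands (paths abbreviated), the sequence just traced is approximately:

  # start the host app with bdev_nvme debug logging; --wait-for-rpc gates
  # subsystem init until framework_start_init is called over RPC
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach via the discovery service on 10.0.0.3:8009; --wait-for-attach
  # returns only once the discovered subsystem's controller is attached
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach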
00:28:58.023 17:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:58.958 [2024-11-20 17:49:40.639329] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:58.958 [2024-11-20 17:49:40.639391] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:58.958 [2024-11-20 17:49:40.639413] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:58.958 [2024-11-20 17:49:40.725498] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:28:59.216 [2024-11-20 17:49:40.782587] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:59.216 [2024-11-20 17:49:40.782657] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:59.216 [2024-11-20 17:49:40.782686] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:59.217 [2024-11-20 17:49:40.782702] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:59.217 [2024-11-20 17:49:40.782730] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.217 [2024-11-20 17:49:40.787929] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc54a10 was disconnected and freed. delete nvme_qpair. 
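For readers following the trace: the block above is the complete attach path, and it can be reproduced against a standalone host app with the same RPCs. A minimal sketch, assuming the host-side nvmf_tgt from this run is listening on /tmp/host.sock and the target on 10.0.0.3, and taking rpc_cmd as a thin wrapper over scripts/rpc.py (the wrapper definition is an assumption; the RPC names and flags are copied verbatim from the trace):

    # stand-in for the suite's rpc_cmd helper (assumed wrapper, not the real one)
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # the wait loop the trace keeps repeating, condensed: poll the bdev
    # names until the sorted list equals the expected value
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev nvme0n1

The three reconnect flags are what the rest of the test leans on: with --ctrlr-loss-timeout-sec 2 the host gives up on a disconnected controller after roughly two seconds, which is what the interface-removal steps below are designed to trigger.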
00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:59.217 17:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.152 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.152 17:49:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.411 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:00.411 17:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.346 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.346 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.347 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.347 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.347 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.347 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.347 17:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.347 17:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.347 17:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:01.347 17:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.282 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.540 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:02.540 17:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:03.538 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:03.539 17:49:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:03.539 17:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:04.474 17:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.474 [2024-11-20 17:49:46.220666] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:04.474 [2024-11-20 17:49:46.220736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.474 [2024-11-20 17:49:46.220753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.474 [2024-11-20 17:49:46.220766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.474 [2024-11-20 17:49:46.220776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.474 [2024-11-20 17:49:46.220786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.474 [2024-11-20 17:49:46.220796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.474 [2024-11-20 17:49:46.220806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.474 [2024-11-20 17:49:46.220815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.474 [2024-11-20 17:49:46.220825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.474 [2024-11-20 17:49:46.220834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.474 [2024-11-20 17:49:46.220843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc315c0 is same with the state(6) to be set 00:29:04.474 [2024-11-20 17:49:46.230660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc315c0 (9): Bad file descriptor 00:29:04.474 [2024-11-20 17:49:46.240717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:05.850 [2024-11-20 17:49:47.249166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:05.850 [2024-11-20 17:49:47.249271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc315c0 with addr=10.0.0.3, port=4420 00:29:05.850 [2024-11-20 17:49:47.249302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc315c0 is same with the state(6) to be set 00:29:05.850 [2024-11-20 17:49:47.249402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc315c0 (9): Bad file descriptor 00:29:05.850 [2024-11-20 17:49:47.250191] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:05.850 [2024-11-20 17:49:47.250262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:05.850 [2024-11-20 17:49:47.250283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:05.850 [2024-11-20 17:49:47.250302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:05.850 [2024-11-20 17:49:47.250339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-11-20 17:49:47.250358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:05.850 17:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:06.785 [2024-11-20 17:49:48.250415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:06.785 [2024-11-20 17:49:48.250459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:06.785 [2024-11-20 17:49:48.250471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:06.785 [2024-11-20 17:49:48.250482] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:29:06.785 [2024-11-20 17:49:48.250507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.785 [2024-11-20 17:49:48.250539] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:29:06.785 [2024-11-20 17:49:48.250584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.785 [2024-11-20 17:49:48.250600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.785 [2024-11-20 17:49:48.250616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.785 [2024-11-20 17:49:48.250625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.785 [2024-11-20 17:49:48.250636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.785 [2024-11-20 17:49:48.250646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.785 [2024-11-20 17:49:48.250656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.785 [2024-11-20 17:49:48.250666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.785 [2024-11-20 17:49:48.250677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.785 [2024-11-20 17:49:48.250686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.785 [2024-11-20 17:49:48.250696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:06.785 [2024-11-20 17:49:48.250736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc20d20 (9): Bad file descriptor 00:29:06.786 [2024-11-20 17:49:48.251729] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:06.786 [2024-11-20 17:49:48.251751] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:06.786 17:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.748 17:49:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:07.748 17:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:08.681 [2024-11-20 17:49:50.261115] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:08.681 [2024-11-20 17:49:50.261181] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:08.681 [2024-11-20 17:49:50.261201] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:08.681 [2024-11-20 17:49:50.347273] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:29:08.681 [2024-11-20 17:49:50.403715] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:08.681 [2024-11-20 17:49:50.403761] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:08.681 [2024-11-20 17:49:50.403784] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:08.681 [2024-11-20 17:49:50.403799] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:29:08.681 [2024-11-20 17:49:50.403807] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:08.681 [2024-11-20 17:49:50.409474] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc2eca0 was disconnected and freed. delete nvme_qpair. 
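Between the two discovery attaches, the trace is exercising a link-failure round trip inside the target's network namespace. Condensed from the commands logged above (namespace, interface, and address exactly as in this run; wait_for_bdev as sketched earlier):

    # take the target address and link away: the host controller exhausts its
    # reconnect budget (ctrlr-loss-timeout-sec 2) and lands in a failed state
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # bdev list drains once the controller fails

    # bring both back: discovery re-attaches the subsystem, this time as nvme1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1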
00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.938 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 112206 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 112206 ']' 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 112206 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112206 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.939 killing process with pid 112206 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112206' 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 112206 00:29:08.939 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 112206 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.196 rmmod nvme_tcp 00:29:09.196 rmmod nvme_fabrics 00:29:09.196 rmmod nvme_keyring 00:29:09.196 17:49:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 112156 ']' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 112156 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 112156 ']' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 112156 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112156 00:29:09.196 killing process with pid 112156 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112156' 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 112156 00:29:09.196 17:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 112156 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:09.453 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:09.711 17:49:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:29:09.711 00:29:09.711 real 0m14.871s 00:29:09.711 user 0m26.160s 00:29:09.711 sys 0m1.821s 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.711 ************************************ 00:29:09.711 END TEST nvmf_discovery_remove_ifc 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:09.711 ************************************ 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.711 17:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.711 ************************************ 00:29:09.711 START TEST nvmf_identify_kernel_target 00:29:09.711 ************************************ 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:09.971 * Looking for test storage... 
00:29:09.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.971 --rc genhtml_branch_coverage=1 00:29:09.971 --rc genhtml_function_coverage=1 00:29:09.971 --rc genhtml_legend=1 00:29:09.971 --rc geninfo_all_blocks=1 00:29:09.971 --rc geninfo_unexecuted_blocks=1 00:29:09.971 00:29:09.971 ' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.971 --rc genhtml_branch_coverage=1 00:29:09.971 --rc genhtml_function_coverage=1 00:29:09.971 --rc genhtml_legend=1 00:29:09.971 --rc geninfo_all_blocks=1 00:29:09.971 --rc geninfo_unexecuted_blocks=1 00:29:09.971 00:29:09.971 ' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.971 --rc genhtml_branch_coverage=1 00:29:09.971 --rc genhtml_function_coverage=1 00:29:09.971 --rc genhtml_legend=1 00:29:09.971 --rc geninfo_all_blocks=1 00:29:09.971 --rc geninfo_unexecuted_blocks=1 00:29:09.971 00:29:09.971 ' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:09.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.971 --rc genhtml_branch_coverage=1 00:29:09.971 --rc genhtml_function_coverage=1 00:29:09.971 --rc genhtml_legend=1 00:29:09.971 --rc geninfo_all_blocks=1 00:29:09.971 --rc geninfo_unexecuted_blocks=1 00:29:09.971 00:29:09.971 ' 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
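The xtrace run above is scripts/common.sh comparing the installed lcov (1.15 here) against 2, component by component, to decide which coverage flags apply. A condensed sketch of that comparison, keeping the helper's splitting behavior (the body is simplified and the flag assignment abridged from the trace):

    # lt A B: succeed when version A sorts before version B; components are
    # split on '.', '-' and ':' as in the trace, missing components count as 0
    lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'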
00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.971 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:09.972 17:49:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:09.972 17:49:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:09.972 Cannot find device "nvmf_init_br" 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:09.972 Cannot find device "nvmf_init_br2" 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:29:09.972 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:10.231 Cannot find device "nvmf_tgt_br" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:10.231 Cannot find device "nvmf_tgt_br2" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:10.231 Cannot find device "nvmf_init_br" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:10.231 Cannot find device "nvmf_init_br2" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:10.231 Cannot find device "nvmf_tgt_br" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:10.231 Cannot find device "nvmf_tgt_br2" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:10.231 Cannot find device "nvmf_br" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:10.231 Cannot find device "nvmf_init_if" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:10.231 Cannot find device "nvmf_init_if2" 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:10.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.231 17:49:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:10.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:10.231 17:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:10.231 17:49:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:10.231 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:10.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:10.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:29:10.490 00:29:10.490 --- 10.0.0.3 ping statistics --- 00:29:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.490 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:10.490 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:10.490 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:29:10.490 00:29:10.490 --- 10.0.0.4 ping statistics --- 00:29:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.490 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:10.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:29:10.490 00:29:10.490 --- 10.0.0.1 ping statistics --- 00:29:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.490 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:10.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
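With the four veth peers enslaved to nvmf_br, `ipts` opens port 4420 and allows intra-bridge forwarding. It is a thin wrapper whose expansion is visible in the trace: every rule it installs carries an SPDK_NVMF comment, so the test can later delete only its own rules. Reconstructed from the expanded commands above:

  ipts() {
      # Tag each rule so teardown can strip exactly these rules later
      # (iptables-save | grep -v SPDK_NVMF | iptables-restore).
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings around this point (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside the target namespace) confirm reachability in both directions across the bridge.
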
00:29:10.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:29:10.490 00:29:10.490 --- 10.0.0.2 ping statistics --- 00:29:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.490 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
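identify_kernel_nvmf.sh pins its cleanup to the shell's EXIT trap before it configures anything, which is why teardown still runs later even if an assertion fails mid-test; the '|| :' keeps a failing nvmftestfini from skipping the kernel-target cleanup:

  # Registered before anything destructive happens; fires on success,
  # failure, and signal-driven exits alike.
  trap 'nvmftestfini || :; clean_kernel_target' EXIT

get_main_ns_ip then picks the address the kernel target will listen on. For tcp that is NVMF_INITIATOR_IP (10.0.0.1): the kernel target lives in the root namespace, so the SPDK-side namespace plays the remote host. The configure_kernel_target bookkeeping continues in the trace below.
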
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:10.490 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:10.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:10.749 Waiting for block devices as requested 00:29:11.008 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:11.008 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:11.008 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:11.270 No valid GPT data, bailing 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:11.270 17:49:52 
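setup.sh reset has just returned the emulated NVMe controllers to a kernel driver, and the loop now entered scans /sys/block/nvme* for a namespace that is safe to export. "No valid GPT data, bailing" is the good outcome here: spdk-gpt.py found no partition table, so block_in_use returns 1 and the device remains a candidate. Condensed (is_block_zoned and block_in_use are the helpers named in the trace; this collapses their expansion):

  for block in /sys/block/nvme*; do
      dev=${block##*/}
      [[ -e $block ]] || continue
      is_block_zoned "$dev" && continue   # skip zoned namespaces
      block_in_use "$dev" && continue     # skip devices holding a partition table
      nvme=/dev/$dev                      # last free device wins (/dev/nvme1n1 here)
  done
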
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:11.270 No valid GPT data, bailing 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:11.270 No valid GPT data, bailing 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:11.270 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
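The device scan is still in flight at this point; /dev/nvme1n1 wins just below, and configure_kernel_target then assembles the whole kernel target through configfs, as the mkdir/echo/ln -s sequence in the following trace shows. Distilled into a standalone sketch (xtrace does not show redirection targets, so the attribute paths below are the standard nvmet ones rather than something visible in the log):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute names assumed
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # publish the subsystem on the port

Once the symlink lands, the port is live and the `nvme discover` that follows should list two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.
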
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:11.271 17:49:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:11.271 No valid GPT data, bailing 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:29:11.271 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:11.529 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.1 -t tcp -s 4420 00:29:11.529 00:29:11.529 Discovery Log Number of Records 2, Generation counter 2 00:29:11.529 =====Discovery Log Entry 0====== 00:29:11.529 trtype: tcp 00:29:11.529 adrfam: ipv4 00:29:11.529 subtype: current discovery subsystem 00:29:11.529 treq: not specified, sq flow control disable supported 00:29:11.529 portid: 1 00:29:11.529 trsvcid: 4420 00:29:11.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:11.529 traddr: 10.0.0.1 00:29:11.529 eflags: none 00:29:11.529 sectype: none 00:29:11.529 =====Discovery Log Entry 1====== 00:29:11.529 trtype: tcp 00:29:11.529 adrfam: ipv4 00:29:11.529 subtype: nvme subsystem 00:29:11.529 treq: not 
specified, sq flow control disable supported 00:29:11.529 portid: 1 00:29:11.529 trsvcid: 4420 00:29:11.529 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:11.529 traddr: 10.0.0.1 00:29:11.529 eflags: none 00:29:11.529 sectype: none 00:29:11.529 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:11.529 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:11.529 ===================================================== 00:29:11.529 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:11.529 ===================================================== 00:29:11.529 Controller Capabilities/Features 00:29:11.529 ================================ 00:29:11.529 Vendor ID: 0000 00:29:11.529 Subsystem Vendor ID: 0000 00:29:11.529 Serial Number: 63685a60e96b43af50d4 00:29:11.529 Model Number: Linux 00:29:11.529 Firmware Version: 6.8.9-20 00:29:11.529 Recommended Arb Burst: 0 00:29:11.529 IEEE OUI Identifier: 00 00 00 00:29:11.529 Multi-path I/O 00:29:11.529 May have multiple subsystem ports: No 00:29:11.529 May have multiple controllers: No 00:29:11.529 Associated with SR-IOV VF: No 00:29:11.529 Max Data Transfer Size: Unlimited 00:29:11.529 Max Number of Namespaces: 0 00:29:11.529 Max Number of I/O Queues: 1024 00:29:11.529 NVMe Specification Version (VS): 1.3 00:29:11.529 NVMe Specification Version (Identify): 1.3 00:29:11.529 Maximum Queue Entries: 1024 00:29:11.529 Contiguous Queues Required: No 00:29:11.529 Arbitration Mechanisms Supported 00:29:11.529 Weighted Round Robin: Not Supported 00:29:11.529 Vendor Specific: Not Supported 00:29:11.529 Reset Timeout: 7500 ms 00:29:11.529 Doorbell Stride: 4 bytes 00:29:11.529 NVM Subsystem Reset: Not Supported 00:29:11.530 Command Sets Supported 00:29:11.530 NVM Command Set: Supported 00:29:11.530 Boot Partition: Not Supported 00:29:11.530 Memory Page Size Minimum: 4096 bytes 00:29:11.530 Memory Page Size Maximum: 4096 bytes 00:29:11.530 Persistent Memory Region: Not Supported 00:29:11.530 Optional Asynchronous Events Supported 00:29:11.530 Namespace Attribute Notices: Not Supported 00:29:11.530 Firmware Activation Notices: Not Supported 00:29:11.530 ANA Change Notices: Not Supported 00:29:11.530 PLE Aggregate Log Change Notices: Not Supported 00:29:11.530 LBA Status Info Alert Notices: Not Supported 00:29:11.530 EGE Aggregate Log Change Notices: Not Supported 00:29:11.530 Normal NVM Subsystem Shutdown event: Not Supported 00:29:11.530 Zone Descriptor Change Notices: Not Supported 00:29:11.530 Discovery Log Change Notices: Supported 00:29:11.530 Controller Attributes 00:29:11.530 128-bit Host Identifier: Not Supported 00:29:11.530 Non-Operational Permissive Mode: Not Supported 00:29:11.530 NVM Sets: Not Supported 00:29:11.530 Read Recovery Levels: Not Supported 00:29:11.530 Endurance Groups: Not Supported 00:29:11.530 Predictable Latency Mode: Not Supported 00:29:11.530 Traffic Based Keep ALive: Not Supported 00:29:11.530 Namespace Granularity: Not Supported 00:29:11.530 SQ Associations: Not Supported 00:29:11.530 UUID List: Not Supported 00:29:11.530 Multi-Domain Subsystem: Not Supported 00:29:11.530 Fixed Capacity Management: Not Supported 00:29:11.530 Variable Capacity Management: Not Supported 00:29:11.530 Delete Endurance Group: Not Supported 00:29:11.530 Delete NVM Set: Not Supported 00:29:11.530 Extended LBA Formats Supported: Not Supported 00:29:11.530 Flexible Data 
Placement Supported: Not Supported 00:29:11.530 00:29:11.530 Controller Memory Buffer Support 00:29:11.530 ================================ 00:29:11.530 Supported: No 00:29:11.530 00:29:11.530 Persistent Memory Region Support 00:29:11.530 ================================ 00:29:11.530 Supported: No 00:29:11.530 00:29:11.530 Admin Command Set Attributes 00:29:11.530 ============================ 00:29:11.530 Security Send/Receive: Not Supported 00:29:11.530 Format NVM: Not Supported 00:29:11.530 Firmware Activate/Download: Not Supported 00:29:11.530 Namespace Management: Not Supported 00:29:11.530 Device Self-Test: Not Supported 00:29:11.530 Directives: Not Supported 00:29:11.530 NVMe-MI: Not Supported 00:29:11.530 Virtualization Management: Not Supported 00:29:11.530 Doorbell Buffer Config: Not Supported 00:29:11.530 Get LBA Status Capability: Not Supported 00:29:11.530 Command & Feature Lockdown Capability: Not Supported 00:29:11.530 Abort Command Limit: 1 00:29:11.530 Async Event Request Limit: 1 00:29:11.530 Number of Firmware Slots: N/A 00:29:11.530 Firmware Slot 1 Read-Only: N/A 00:29:11.530 Firmware Activation Without Reset: N/A 00:29:11.530 Multiple Update Detection Support: N/A 00:29:11.530 Firmware Update Granularity: No Information Provided 00:29:11.530 Per-Namespace SMART Log: No 00:29:11.530 Asymmetric Namespace Access Log Page: Not Supported 00:29:11.530 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:11.530 Command Effects Log Page: Not Supported 00:29:11.530 Get Log Page Extended Data: Supported 00:29:11.530 Telemetry Log Pages: Not Supported 00:29:11.530 Persistent Event Log Pages: Not Supported 00:29:11.530 Supported Log Pages Log Page: May Support 00:29:11.530 Commands Supported & Effects Log Page: Not Supported 00:29:11.530 Feature Identifiers & Effects Log Page:May Support 00:29:11.530 NVMe-MI Commands & Effects Log Page: May Support 00:29:11.530 Data Area 4 for Telemetry Log: Not Supported 00:29:11.530 Error Log Page Entries Supported: 1 00:29:11.530 Keep Alive: Not Supported 00:29:11.530 00:29:11.530 NVM Command Set Attributes 00:29:11.530 ========================== 00:29:11.530 Submission Queue Entry Size 00:29:11.530 Max: 1 00:29:11.530 Min: 1 00:29:11.530 Completion Queue Entry Size 00:29:11.530 Max: 1 00:29:11.530 Min: 1 00:29:11.530 Number of Namespaces: 0 00:29:11.530 Compare Command: Not Supported 00:29:11.530 Write Uncorrectable Command: Not Supported 00:29:11.530 Dataset Management Command: Not Supported 00:29:11.530 Write Zeroes Command: Not Supported 00:29:11.530 Set Features Save Field: Not Supported 00:29:11.530 Reservations: Not Supported 00:29:11.530 Timestamp: Not Supported 00:29:11.530 Copy: Not Supported 00:29:11.530 Volatile Write Cache: Not Present 00:29:11.530 Atomic Write Unit (Normal): 1 00:29:11.530 Atomic Write Unit (PFail): 1 00:29:11.530 Atomic Compare & Write Unit: 1 00:29:11.530 Fused Compare & Write: Not Supported 00:29:11.530 Scatter-Gather List 00:29:11.530 SGL Command Set: Supported 00:29:11.530 SGL Keyed: Not Supported 00:29:11.530 SGL Bit Bucket Descriptor: Not Supported 00:29:11.530 SGL Metadata Pointer: Not Supported 00:29:11.530 Oversized SGL: Not Supported 00:29:11.530 SGL Metadata Address: Not Supported 00:29:11.530 SGL Offset: Supported 00:29:11.530 Transport SGL Data Block: Not Supported 00:29:11.530 Replay Protected Memory Block: Not Supported 00:29:11.530 00:29:11.530 Firmware Slot Information 00:29:11.530 ========================= 00:29:11.530 Active slot: 0 00:29:11.530 00:29:11.530 00:29:11.530 Error Log 
00:29:11.530 ========= 00:29:11.530 00:29:11.530 Active Namespaces 00:29:11.530 ================= 00:29:11.530 Discovery Log Page 00:29:11.530 ================== 00:29:11.530 Generation Counter: 2 00:29:11.530 Number of Records: 2 00:29:11.530 Record Format: 0 00:29:11.530 00:29:11.530 Discovery Log Entry 0 00:29:11.530 ---------------------- 00:29:11.530 Transport Type: 3 (TCP) 00:29:11.530 Address Family: 1 (IPv4) 00:29:11.530 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:11.530 Entry Flags: 00:29:11.530 Duplicate Returned Information: 0 00:29:11.530 Explicit Persistent Connection Support for Discovery: 0 00:29:11.530 Transport Requirements: 00:29:11.530 Secure Channel: Not Specified 00:29:11.530 Port ID: 1 (0x0001) 00:29:11.530 Controller ID: 65535 (0xffff) 00:29:11.530 Admin Max SQ Size: 32 00:29:11.530 Transport Service Identifier: 4420 00:29:11.530 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:11.530 Transport Address: 10.0.0.1 00:29:11.530 Discovery Log Entry 1 00:29:11.530 ---------------------- 00:29:11.530 Transport Type: 3 (TCP) 00:29:11.530 Address Family: 1 (IPv4) 00:29:11.530 Subsystem Type: 2 (NVM Subsystem) 00:29:11.530 Entry Flags: 00:29:11.530 Duplicate Returned Information: 0 00:29:11.530 Explicit Persistent Connection Support for Discovery: 0 00:29:11.530 Transport Requirements: 00:29:11.530 Secure Channel: Not Specified 00:29:11.530 Port ID: 1 (0x0001) 00:29:11.530 Controller ID: 65535 (0xffff) 00:29:11.530 Admin Max SQ Size: 32 00:29:11.530 Transport Service Identifier: 4420 00:29:11.530 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:11.530 Transport Address: 10.0.0.1 00:29:11.530 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.789 get_feature(0x01) failed 00:29:11.789 get_feature(0x02) failed 00:29:11.789 get_feature(0x04) failed 00:29:11.789 ===================================================== 00:29:11.789 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.789 ===================================================== 00:29:11.789 Controller Capabilities/Features 00:29:11.789 ================================ 00:29:11.789 Vendor ID: 0000 00:29:11.789 Subsystem Vendor ID: 0000 00:29:11.789 Serial Number: d06ce4ceed354da1907f 00:29:11.789 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:11.789 Firmware Version: 6.8.9-20 00:29:11.789 Recommended Arb Burst: 6 00:29:11.789 IEEE OUI Identifier: 00 00 00 00:29:11.789 Multi-path I/O 00:29:11.789 May have multiple subsystem ports: Yes 00:29:11.789 May have multiple controllers: Yes 00:29:11.789 Associated with SR-IOV VF: No 00:29:11.789 Max Data Transfer Size: Unlimited 00:29:11.789 Max Number of Namespaces: 1024 00:29:11.789 Max Number of I/O Queues: 128 00:29:11.789 NVMe Specification Version (VS): 1.3 00:29:11.789 NVMe Specification Version (Identify): 1.3 00:29:11.789 Maximum Queue Entries: 1024 00:29:11.789 Contiguous Queues Required: No 00:29:11.789 Arbitration Mechanisms Supported 00:29:11.789 Weighted Round Robin: Not Supported 00:29:11.789 Vendor Specific: Not Supported 00:29:11.789 Reset Timeout: 7500 ms 00:29:11.789 Doorbell Stride: 4 bytes 00:29:11.789 NVM Subsystem Reset: Not Supported 00:29:11.789 Command Sets Supported 00:29:11.789 NVM Command Set: Supported 00:29:11.789 Boot Partition: Not Supported 00:29:11.789 Memory 
Page Size Minimum: 4096 bytes 00:29:11.789 Memory Page Size Maximum: 4096 bytes 00:29:11.789 Persistent Memory Region: Not Supported 00:29:11.789 Optional Asynchronous Events Supported 00:29:11.789 Namespace Attribute Notices: Supported 00:29:11.789 Firmware Activation Notices: Not Supported 00:29:11.789 ANA Change Notices: Supported 00:29:11.789 PLE Aggregate Log Change Notices: Not Supported 00:29:11.789 LBA Status Info Alert Notices: Not Supported 00:29:11.789 EGE Aggregate Log Change Notices: Not Supported 00:29:11.789 Normal NVM Subsystem Shutdown event: Not Supported 00:29:11.789 Zone Descriptor Change Notices: Not Supported 00:29:11.789 Discovery Log Change Notices: Not Supported 00:29:11.789 Controller Attributes 00:29:11.789 128-bit Host Identifier: Supported 00:29:11.789 Non-Operational Permissive Mode: Not Supported 00:29:11.789 NVM Sets: Not Supported 00:29:11.789 Read Recovery Levels: Not Supported 00:29:11.789 Endurance Groups: Not Supported 00:29:11.789 Predictable Latency Mode: Not Supported 00:29:11.789 Traffic Based Keep ALive: Supported 00:29:11.789 Namespace Granularity: Not Supported 00:29:11.789 SQ Associations: Not Supported 00:29:11.789 UUID List: Not Supported 00:29:11.789 Multi-Domain Subsystem: Not Supported 00:29:11.789 Fixed Capacity Management: Not Supported 00:29:11.789 Variable Capacity Management: Not Supported 00:29:11.789 Delete Endurance Group: Not Supported 00:29:11.789 Delete NVM Set: Not Supported 00:29:11.789 Extended LBA Formats Supported: Not Supported 00:29:11.789 Flexible Data Placement Supported: Not Supported 00:29:11.789 00:29:11.789 Controller Memory Buffer Support 00:29:11.789 ================================ 00:29:11.789 Supported: No 00:29:11.789 00:29:11.789 Persistent Memory Region Support 00:29:11.789 ================================ 00:29:11.789 Supported: No 00:29:11.789 00:29:11.789 Admin Command Set Attributes 00:29:11.789 ============================ 00:29:11.789 Security Send/Receive: Not Supported 00:29:11.789 Format NVM: Not Supported 00:29:11.789 Firmware Activate/Download: Not Supported 00:29:11.789 Namespace Management: Not Supported 00:29:11.789 Device Self-Test: Not Supported 00:29:11.789 Directives: Not Supported 00:29:11.789 NVMe-MI: Not Supported 00:29:11.789 Virtualization Management: Not Supported 00:29:11.789 Doorbell Buffer Config: Not Supported 00:29:11.789 Get LBA Status Capability: Not Supported 00:29:11.789 Command & Feature Lockdown Capability: Not Supported 00:29:11.789 Abort Command Limit: 4 00:29:11.789 Async Event Request Limit: 4 00:29:11.789 Number of Firmware Slots: N/A 00:29:11.789 Firmware Slot 1 Read-Only: N/A 00:29:11.789 Firmware Activation Without Reset: N/A 00:29:11.789 Multiple Update Detection Support: N/A 00:29:11.789 Firmware Update Granularity: No Information Provided 00:29:11.789 Per-Namespace SMART Log: Yes 00:29:11.789 Asymmetric Namespace Access Log Page: Supported 00:29:11.789 ANA Transition Time : 10 sec 00:29:11.789 00:29:11.789 Asymmetric Namespace Access Capabilities 00:29:11.789 ANA Optimized State : Supported 00:29:11.789 ANA Non-Optimized State : Supported 00:29:11.789 ANA Inaccessible State : Supported 00:29:11.789 ANA Persistent Loss State : Supported 00:29:11.789 ANA Change State : Supported 00:29:11.789 ANAGRPID is not changed : No 00:29:11.789 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:11.789 00:29:11.789 ANA Group Identifier Maximum : 128 00:29:11.789 Number of ANA Group Identifiers : 128 00:29:11.789 Max Number of Allowed Namespaces : 1024 00:29:11.789 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:29:11.789 Command Effects Log Page: Supported 00:29:11.789 Get Log Page Extended Data: Supported 00:29:11.789 Telemetry Log Pages: Not Supported 00:29:11.789 Persistent Event Log Pages: Not Supported 00:29:11.789 Supported Log Pages Log Page: May Support 00:29:11.789 Commands Supported & Effects Log Page: Not Supported 00:29:11.789 Feature Identifiers & Effects Log Page:May Support 00:29:11.789 NVMe-MI Commands & Effects Log Page: May Support 00:29:11.789 Data Area 4 for Telemetry Log: Not Supported 00:29:11.789 Error Log Page Entries Supported: 128 00:29:11.789 Keep Alive: Supported 00:29:11.789 Keep Alive Granularity: 1000 ms 00:29:11.789 00:29:11.789 NVM Command Set Attributes 00:29:11.789 ========================== 00:29:11.790 Submission Queue Entry Size 00:29:11.790 Max: 64 00:29:11.790 Min: 64 00:29:11.790 Completion Queue Entry Size 00:29:11.790 Max: 16 00:29:11.790 Min: 16 00:29:11.790 Number of Namespaces: 1024 00:29:11.790 Compare Command: Not Supported 00:29:11.790 Write Uncorrectable Command: Not Supported 00:29:11.790 Dataset Management Command: Supported 00:29:11.790 Write Zeroes Command: Supported 00:29:11.790 Set Features Save Field: Not Supported 00:29:11.790 Reservations: Not Supported 00:29:11.790 Timestamp: Not Supported 00:29:11.790 Copy: Not Supported 00:29:11.790 Volatile Write Cache: Present 00:29:11.790 Atomic Write Unit (Normal): 1 00:29:11.790 Atomic Write Unit (PFail): 1 00:29:11.790 Atomic Compare & Write Unit: 1 00:29:11.790 Fused Compare & Write: Not Supported 00:29:11.790 Scatter-Gather List 00:29:11.790 SGL Command Set: Supported 00:29:11.790 SGL Keyed: Not Supported 00:29:11.790 SGL Bit Bucket Descriptor: Not Supported 00:29:11.790 SGL Metadata Pointer: Not Supported 00:29:11.790 Oversized SGL: Not Supported 00:29:11.790 SGL Metadata Address: Not Supported 00:29:11.790 SGL Offset: Supported 00:29:11.790 Transport SGL Data Block: Not Supported 00:29:11.790 Replay Protected Memory Block: Not Supported 00:29:11.790 00:29:11.790 Firmware Slot Information 00:29:11.790 ========================= 00:29:11.790 Active slot: 0 00:29:11.790 00:29:11.790 Asymmetric Namespace Access 00:29:11.790 =========================== 00:29:11.790 Change Count : 0 00:29:11.790 Number of ANA Group Descriptors : 1 00:29:11.790 ANA Group Descriptor : 0 00:29:11.790 ANA Group ID : 1 00:29:11.790 Number of NSID Values : 1 00:29:11.790 Change Count : 0 00:29:11.790 ANA State : 1 00:29:11.790 Namespace Identifier : 1 00:29:11.790 00:29:11.790 Commands Supported and Effects 00:29:11.790 ============================== 00:29:11.790 Admin Commands 00:29:11.790 -------------- 00:29:11.790 Get Log Page (02h): Supported 00:29:11.790 Identify (06h): Supported 00:29:11.790 Abort (08h): Supported 00:29:11.790 Set Features (09h): Supported 00:29:11.790 Get Features (0Ah): Supported 00:29:11.790 Asynchronous Event Request (0Ch): Supported 00:29:11.790 Keep Alive (18h): Supported 00:29:11.790 I/O Commands 00:29:11.790 ------------ 00:29:11.790 Flush (00h): Supported 00:29:11.790 Write (01h): Supported LBA-Change 00:29:11.790 Read (02h): Supported 00:29:11.790 Write Zeroes (08h): Supported LBA-Change 00:29:11.790 Dataset Management (09h): Supported 00:29:11.790 00:29:11.790 Error Log 00:29:11.790 ========= 00:29:11.790 Entry: 0 00:29:11.790 Error Count: 0x3 00:29:11.790 Submission Queue Id: 0x0 00:29:11.790 Command Id: 0x5 00:29:11.790 Phase Bit: 0 00:29:11.790 Status Code: 0x2 00:29:11.790 Status Code Type: 0x0 00:29:11.790 Do Not Retry: 1 00:29:11.790 Error 
Location: 0x28 00:29:11.790 LBA: 0x0 00:29:11.790 Namespace: 0x0 00:29:11.790 Vendor Log Page: 0x0 00:29:11.790 ----------- 00:29:11.790 Entry: 1 00:29:11.790 Error Count: 0x2 00:29:11.790 Submission Queue Id: 0x0 00:29:11.790 Command Id: 0x5 00:29:11.790 Phase Bit: 0 00:29:11.790 Status Code: 0x2 00:29:11.790 Status Code Type: 0x0 00:29:11.790 Do Not Retry: 1 00:29:11.790 Error Location: 0x28 00:29:11.790 LBA: 0x0 00:29:11.790 Namespace: 0x0 00:29:11.790 Vendor Log Page: 0x0 00:29:11.790 ----------- 00:29:11.790 Entry: 2 00:29:11.790 Error Count: 0x1 00:29:11.790 Submission Queue Id: 0x0 00:29:11.790 Command Id: 0x4 00:29:11.790 Phase Bit: 0 00:29:11.790 Status Code: 0x2 00:29:11.790 Status Code Type: 0x0 00:29:11.790 Do Not Retry: 1 00:29:11.790 Error Location: 0x28 00:29:11.790 LBA: 0x0 00:29:11.790 Namespace: 0x0 00:29:11.790 Vendor Log Page: 0x0 00:29:11.790 00:29:11.790 Number of Queues 00:29:11.790 ================ 00:29:11.790 Number of I/O Submission Queues: 128 00:29:11.790 Number of I/O Completion Queues: 128 00:29:11.790 00:29:11.790 ZNS Specific Controller Data 00:29:11.790 ============================ 00:29:11.790 Zone Append Size Limit: 0 00:29:11.790 00:29:11.790 00:29:11.790 Active Namespaces 00:29:11.790 ================= 00:29:11.790 get_feature(0x05) failed 00:29:11.790 Namespace ID:1 00:29:11.790 Command Set Identifier: NVM (00h) 00:29:11.790 Deallocate: Supported 00:29:11.790 Deallocated/Unwritten Error: Not Supported 00:29:11.790 Deallocated Read Value: Unknown 00:29:11.790 Deallocate in Write Zeroes: Not Supported 00:29:11.790 Deallocated Guard Field: 0xFFFF 00:29:11.790 Flush: Supported 00:29:11.790 Reservation: Not Supported 00:29:11.790 Namespace Sharing Capabilities: Multiple Controllers 00:29:11.790 Size (in LBAs): 1310720 (5GiB) 00:29:11.790 Capacity (in LBAs): 1310720 (5GiB) 00:29:11.790 Utilization (in LBAs): 1310720 (5GiB) 00:29:11.790 UUID: 19c85812-6306-425e-847b-fe8b9f33323d 00:29:11.790 Thin Provisioning: Not Supported 00:29:11.790 Per-NS Atomic Units: Yes 00:29:11.790 Atomic Boundary Size (Normal): 0 00:29:11.790 Atomic Boundary Size (PFail): 0 00:29:11.790 Atomic Boundary Offset: 0 00:29:11.790 NGUID/EUI64 Never Reused: No 00:29:11.790 ANA group ID: 1 00:29:11.790 Namespace Write Protected: No 00:29:11.790 Number of LBA Formats: 1 00:29:11.790 Current LBA Format: LBA Format #00 00:29:11.790 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:29:11.790 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.790 rmmod nvme_tcp 00:29:11.790 rmmod nvme_fabrics 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:11.790 17:49:53 
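Both identify passes succeed. The scattered get_feature(0x01/0x02/0x04/0x05) failures are expected: those are optional features the Linux target does not implement, and spdk_nvme_identify simply reports the rejection and moves on. Since the discovery page advertised the subsystem at 10.0.0.1:4420, a kernel initiator could attach to the same endpoint; this test never does, but for reference:

  # Not executed in this run; uses the hostnqn generated earlier in the log.
  nvme connect -t tcp -a 10.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e
  # ...and detach again:
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn
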
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:11.790 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
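nvmftestfini, fired here by the EXIT trap, is the setup sequence in reverse, roughly as follows (the unload is retried because nvme-tcp can stay referenced for a moment after the last disconnect; the break-on-success detail is an assumption, since the trace only shows the first attempt succeeding):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
  done
  set -e
  modprobe -v -r nvme-fabrics
  # Strip only the rules tagged SPDK_NVMF at setup time, leaving the rest intact:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Then unwind the veth/bridge topology:
  ip link delete nvmf_br type bridge
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  remove_spdk_ns   # drops the nvmf_tgt_ns_spdk namespace itself
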
# return 0 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:29:12.049 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:29:12.307 17:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:12.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:12.873 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.873 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:13.132 00:29:13.132 real 0m3.210s 00:29:13.132 user 0m1.100s 00:29:13.132 sys 0m1.525s 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.132 ************************************ 00:29:13.132 END TEST nvmf_identify_kernel_target 00:29:13.132 ************************************ 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.132 ************************************ 00:29:13.132 START TEST nvmf_auth_host 00:29:13.132 ************************************ 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:13.132 * Looking for test storage... 
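clean_kernel_target, the second half of the trap, dismantles the configfs tree strictly in reverse creation order, since a populated node cannot be rmdir'd, then unloads the target modules and lets setup.sh rebind the devices to uio_pci_generic for the next test. The commands match the trace above (the target of the echo is assumed, as xtrace hides it):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet
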
00:29:13.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:13.132 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:13.391 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:13.391 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.391 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.391 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.391 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.392 --rc genhtml_branch_coverage=1 00:29:13.392 --rc genhtml_function_coverage=1 00:29:13.392 --rc genhtml_legend=1 00:29:13.392 --rc geninfo_all_blocks=1 00:29:13.392 --rc geninfo_unexecuted_blocks=1 00:29:13.392 00:29:13.392 ' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.392 --rc genhtml_branch_coverage=1 00:29:13.392 --rc genhtml_function_coverage=1 00:29:13.392 --rc genhtml_legend=1 00:29:13.392 --rc geninfo_all_blocks=1 00:29:13.392 --rc geninfo_unexecuted_blocks=1 00:29:13.392 00:29:13.392 ' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.392 --rc genhtml_branch_coverage=1 00:29:13.392 --rc genhtml_function_coverage=1 00:29:13.392 --rc genhtml_legend=1 00:29:13.392 --rc geninfo_all_blocks=1 00:29:13.392 --rc geninfo_unexecuted_blocks=1 00:29:13.392 00:29:13.392 ' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.392 --rc genhtml_branch_coverage=1 00:29:13.392 --rc genhtml_function_coverage=1 00:29:13.392 --rc genhtml_legend=1 00:29:13.392 --rc geninfo_all_blocks=1 00:29:13.392 --rc geninfo_unexecuted_blocks=1 00:29:13.392 00:29:13.392 ' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
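The preamble above is auth.sh probing the installed lcov: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, so "lt 1.15 2" is true and the lcov 1.x flag spelling (--rc lcov_branch_coverage=1) gets exported below. The same predicate can be written in two lines with sort -V (an equivalent sketch for these inputs, not the code common.sh actually runs):

  lt() {
      # True when $1 sorts strictly before $2 as a version string.
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo "lcov is a 1.x release: use the old --rc flag spelling"
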
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.392 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
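paths/export.sh prepends its toolchain directories every time it is sourced, which is why the PATH values above repeat the same /opt/golangci, /opt/protoc and /opt/go entries many times over. That is harmless, but a guard like this hypothetical helper (not what export.sh does) would make the prepend idempotent:

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already present: leave PATH unchanged
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/protoc/21.7/bin   # adds each toolchain dir at most once
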
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:13.392 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:13.393 Cannot find device "nvmf_init_br" 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:29:13.393 17:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:13.393 Cannot find device "nvmf_init_br2" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:13.393 Cannot find device "nvmf_tgt_br" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:13.393 Cannot find device "nvmf_tgt_br2" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:13.393 Cannot find device "nvmf_init_br" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:13.393 Cannot find device "nvmf_init_br2" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:13.393 Cannot find device "nvmf_tgt_br" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:13.393 Cannot find device "nvmf_tgt_br2" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:13.393 Cannot find device "nvmf_br" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:13.393 Cannot find device "nvmf_init_if" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:13.393 Cannot find device "nvmf_init_if2" 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:13.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:13.393 17:49:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:13.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:13.393 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
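The nvmf_veth_init steps above build the test topology: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and every host-side peer enslaved to the nvmf_br bridge. A minimal sketch of the same plumbing, reduced to a single initiator/target pair (assuming only iproute2; interface names and the 10.0.0.0/24 addressing mirror the trace):

# Reduced reconstruction of nvmf_veth_init, not part of the recorded run.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers
ip link set nvmf_tgt_br master nvmf_br

With that in place, 10.0.0.1 on the host reaches 10.0.0.3 inside the namespace through nvmf_br, which is exactly what the iptables rules and ping probes below verify.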
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:29:13.652 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:29:13.652 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:29:13.652
00:29:13.652 --- 10.0.0.3 ping statistics ---
00:29:13.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:13.652 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:13.652 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:13.652 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms
00:29:13.652
00:29:13.652 --- 10.0.0.4 ping statistics ---
00:29:13.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:13.652 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:13.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:13.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
00:29:13.652
00:29:13.652 --- 10.0.0.1 ping statistics ---
00:29:13.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:13.652 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:13.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:13.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms
00:29:13.652
00:29:13.652 --- 10.0.0.2 ping statistics ---
00:29:13.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:13.652 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=113221
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 113221
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 113221 ']'
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
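nvmfappstart above comes down to launching the SPDK target inside the namespace and blocking until its RPC socket answers. A rough equivalent, assuming an SPDK build at the path shown in the trace (the polling loop is a stand-in for the real waitforlisten helper, which retries against /var/tmp/spdk.sock up to max_retries times):

# Sketch only; binary path and flags are taken verbatim from the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Stand-in for waitforlisten 113221: poll for the UNIX-domain RPC socket.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done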
00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.652 17:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=af44f33d929756e9612de6777b99b92d 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.RWT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key af44f33d929756e9612de6777b99b92d 0 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 af44f33d929756e9612de6777b99b92d 0 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=af44f33d929756e9612de6777b99b92d 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.RWT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.RWT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RWT 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.027 17:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e65e2bd216f206a6d3550cfcabe6e5a09da6b02b9c639dc99bf39e8ce87cedaf 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.fJK 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e65e2bd216f206a6d3550cfcabe6e5a09da6b02b9c639dc99bf39e8ce87cedaf 3 00:29:15.027 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e65e2bd216f206a6d3550cfcabe6e5a09da6b02b9c639dc99bf39e8ce87cedaf 3 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e65e2bd216f206a6d3550cfcabe6e5a09da6b02b9c639dc99bf39e8ce87cedaf 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.fJK 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.fJK 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fJK 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4d726f5cce601afc7cfbb0d6807a3da6aadc2822ab40a67b 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.9XA 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4d726f5cce601afc7cfbb0d6807a3da6aadc2822ab40a67b 0 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4d726f5cce601afc7cfbb0d6807a3da6aadc2822ab40a67b 0 
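Each gen_dhchap_key call above reduces to: read N random bytes as hex from /dev/urandom, wrap the hex string in the DH-HMAC-CHAP secret format, and store it in a mode-0600 temp file. A self-contained sketch of gen_dhchap_key null 48, assuming the standard DHHC-1 encoding (base64 over the ASCII key plus its little-endian CRC-32, with digest ids 0=null, 1=sha256, 2=sha384, 3=sha512), which is consistent with the DHHC-1:00:... strings that appear later in this trace:

# Sketch; gen_dhchap_key/format_dhchap_key are the test suite's own helpers.
key=$(xxd -p -c0 -l 24 /dev/urandom)           # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC-32 trailer of the DHHC-1 format
print(f"DHHC-1:00:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"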
00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4d726f5cce601afc7cfbb0d6807a3da6aadc2822ab40a67b 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.9XA 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.9XA 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9XA 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ea4f8dc6381c0ec11ca8c939c93c898237de8c0f5a972025 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.zJI 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ea4f8dc6381c0ec11ca8c939c93c898237de8c0f5a972025 2 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ea4f8dc6381c0ec11ca8c939c93c898237de8c0f5a972025 2 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ea4f8dc6381c0ec11ca8c939c93c898237de8c0f5a972025 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:29:15.028 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.zJI 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.zJI 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zJI 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.286 17:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7e871111ce5ea80e92e3214121c1d5a2 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.kHi 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7e871111ce5ea80e92e3214121c1d5a2 1 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7e871111ce5ea80e92e3214121c1d5a2 1 00:29:15.286 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7e871111ce5ea80e92e3214121c1d5a2 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.kHi 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.kHi 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kHi 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d809770f623e50fdbd59c44692ac3a6d 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.ZmS 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d809770f623e50fdbd59c44692ac3a6d 1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d809770f623e50fdbd59c44692ac3a6d 1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=d809770f623e50fdbd59c44692ac3a6d 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.ZmS 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.ZmS 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZmS 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=a33294fb13a7fc66523208fd46faf3aa6516d270e1153ee4 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.UcT 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key a33294fb13a7fc66523208fd46faf3aa6516d270e1153ee4 2 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 a33294fb13a7fc66523208fd46faf3aa6516d270e1153ee4 2 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=a33294fb13a7fc66523208fd46faf3aa6516d270e1153ee4 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:29:15.287 17:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.UcT 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.UcT 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.UcT 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:15.287 17:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6bb62106626800ee5246374672133181 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.UOY 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6bb62106626800ee5246374672133181 0 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6bb62106626800ee5246374672133181 0 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6bb62106626800ee5246374672133181 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:15.287 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.UOY 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.UOY 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UOY 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=220c2055e930e1f79be3e60e69a6890cae86da216d2c5a778b677ddc00e888b4 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.fDv 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 220c2055e930e1f79be3e60e69a6890cae86da216d2c5a778b677ddc00e888b4 3 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 220c2055e930e1f79be3e60e69a6890cae86da216d2c5a778b677ddc00e888b4 3 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=220c2055e930e1f79be3e60e69a6890cae86da216d2c5a778b677ddc00e888b4 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.fDv 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.fDv 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fDv 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 113221 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 113221 ']' 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:15.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.547 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.548 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:15.548 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RWT 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fJK ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fJK 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9XA 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zJI ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zJI 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kHi 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZmS ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZmS 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UcT 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UOY ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UOY 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fDv 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:15.806 17:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:15.806 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:16.065 17:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:16.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:16.324 Waiting for block devices as requested 00:29:16.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.583 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:17.151 No valid GPT data, bailing 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:17.151 No valid GPT data, bailing 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:17.151 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:17.409 No valid GPT data, bailing 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:17.409 17:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:17.409 No valid GPT data, bailing 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:29:17.409 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.1 -t tcp -s 4420 00:29:17.410 00:29:17.410 Discovery Log Number of Records 2, Generation counter 2 00:29:17.410 =====Discovery Log Entry 0====== 00:29:17.410 trtype: tcp 00:29:17.410 adrfam: ipv4 00:29:17.410 subtype: current discovery subsystem 00:29:17.410 treq: not specified, sq flow control disable supported 00:29:17.410 portid: 1 00:29:17.410 trsvcid: 4420 00:29:17.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:17.410 traddr: 10.0.0.1 00:29:17.410 eflags: none 00:29:17.410 sectype: none 00:29:17.410 =====Discovery Log Entry 1====== 00:29:17.410 trtype: tcp 00:29:17.410 adrfam: ipv4 00:29:17.410 subtype: nvme subsystem 00:29:17.410 treq: not specified, sq flow control disable supported 00:29:17.410 portid: 1 00:29:17.410 trsvcid: 4420 00:29:17.410 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:17.410 traddr: 10.0.0.1 00:29:17.410 eflags: none 00:29:17.410 sectype: none 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.410 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.669 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.670 nvme0n1 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:17.670 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 nvme0n1 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.929 
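The rounds traced above are the heart of the test: nvmet_auth_set_key programs the kernel target's DH-HCHAP secrets, and connect_authenticate then narrows SPDK's allowed digests and DH groups before attaching a controller with the matching keys. Below is a minimal sketch of one sha256/ffdhe2048 round as this trace suggests it; the dhchap_* configfs attribute names are an assumption inferred from the bare echoes at auth.sh@48-@51 (the redirection targets are not visible in xtrace), and key1/ckey1 are SPDK keyring names registered earlier in the test, outside this excerpt.

    #!/usr/bin/env bash
    # Hedged sketch of one connect_authenticate round from the trace above.
    # Assumption: nvmet_auth_set_key's echoes land in the nvmet host's
    # dhchap_* configfs attributes; NQNs, address, and DHHC-1 secrets are
    # copied verbatim from the log.
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    key='DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:'
    ckey='DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:'

    host=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest, echoed at auth.sh@48
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group, echoed at auth.sh@49
    echo "$key"         > "$host/dhchap_key"       # host secret (auth.sh@50)
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # ctrl secret, only when non-empty (auth.sh@51)

    # Initiator side: restrict SPDK to the same digest/group, then attach.
    # These two commands appear verbatim in the trace as rpc_cmd calls.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
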
17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:17.929 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:17.930 17:49:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.930 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.189 nvme0n1 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:18.189 17:49:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.189 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.190 nvme0n1 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.190 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.448 17:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.448 17:49:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.448 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.448 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.449 nvme0n1 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.449 
17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.449 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
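From this point the trace repeats the same pattern for every remaining combination: the auth.sh@100-@102 loop markers show digests x DH groups x key ids being swept, with each iteration reprogramming the target keys, attaching, verifying the controller name, and detaching. Schematically (helper bodies paraphrased from the trace markers, not quoted from host/auth.sh):

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do           # auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do       # auth.sh@101
        for keyid in "${!keys[@]}"; do          # auth.sh@102, key ids 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # auth.sh@103
          connect_authenticate "$digest" "$dhgroup" "$keyid" # auth.sh@104
          # The tail visible at auth.sh@64/@65: confirm the controller
          # came up, then detach so the next combination starts clean.
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

The detach after each round is what makes the `nvme0 == nvme0` comparison meaningful on the next one: a failed authentication would leave bdev_nvme_get_controllers empty and the test would abort rather than silently reuse a stale controller.
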
00:29:18.707 nvme0n1 00:29:18.707 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.707 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.708 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.966 17:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:18.966 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.967 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.967 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.225 nvme0n1 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.225 17:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:19.225 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.226 17:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.226 17:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 nvme0n1 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 nvme0n1 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.518 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 nvme0n1 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.804 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.805 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.063 nvme0n1 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.063 17:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.629 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.630 17:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.630 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.888 nvme0n1 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.888 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.147 17:50:02 
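
get_main_ns_ip, traced repeatedly at nvmf/common.sh@765-779, just maps the active transport to the right IP variable and dereferences it; for tcp that resolves to 10.0.0.1 here. A sketch reconstructed from the trace, with the transport variable name ($TEST_TRANSPORT) and the indirection step inferred from '[[ -z 10.0.0.1 ]]' following ip=NVMF_INITIATOR_IP:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # rdma runs dereference the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # tcp runs use the initiator-side IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @771 checks
        ip=${ip_candidates[$TEST_TRANSPORT]}          # @772: pick the variable *name*
        [[ -z ${!ip} ]] && return 1                   # @774: the '[[ -z 10.0.0.1 ]]' dereference
        echo "${!ip}"                                 # @779: emits 10.0.0.1 in this run
    }
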
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 nvme0n1 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.147 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.406 17:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 nvme0n1 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.406 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.665 nvme0n1 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.665 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.924 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.925 17:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.925 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.184 nvme0n1 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
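
Each connect_authenticate pass above (host/auth.sh@55-65) is the host half of the handshake: pin the initiator to one digest/dhgroup pair, attach with the slot's key, confirm the controller exists, detach. Condensed from the trace; the RPC names and flags are verbatim from the log, while the control flow around them is a sketch:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # @58: empty when no controller key (keyid 4)

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"                                               # @61

        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # the nvme0 name check at @64
        rpc_cmd bdev_nvme_detach_controller nvme0                                # @65
    }
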
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.184 17:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
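
The @101/@102 markers above ('for dhgroup in "${dhgroups[@]}"', 'for keyid in "${!keys[@]}"') are the driving loop: every FFDHE group is exercised against every key slot, target side first, host side second. In this stretch the digest is fixed at sha256, so an outer digest loop presumably wraps it. As it reads from the trace, with array contents beyond the groups visible here being an assumption:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 seen in this section
        for keyid in "${!keys[@]}"; do         # key slots 0..4
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"   # host/auth.sh@103: program the target
            connect_authenticate sha256 "$dhgroup" "$keyid"   # host/auth.sh@104: prove the handshake
        done
    done
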
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.131 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.390 nvme0n1 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.390 17:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.390 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.649 nvme0n1 00:29:24.649 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.649 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.649 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.649 17:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.649 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.649 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.907 17:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.907 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.166 nvme0n1 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.166 17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.166 
17:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.733 nvme0n1 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.733 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.992 nvme0n1 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.992 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.250 17:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.250 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.251 17:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:26.819 nvme0n1
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]]
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:26.819 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.820 17:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.433 nvme0n1
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:27.433 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]]
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.434 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:27.999 nvme0n1
00:29:27.999 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.999 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:27.999 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:27.999 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.999 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.258 17:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.826 nvme0n1
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=:
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=:
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.826 17:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 nvme0n1
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 nvme0n1
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:29.761 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]]
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:29.762 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.021 nvme0n1
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:30.021 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.022 nvme0n1
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.022 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]]
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:30.281 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.282 nvme0n1
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=:
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=:
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.282 17:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.282 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.541 nvme0n1
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:30.541 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA:
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=:
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA:
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=:
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.542 nvme0n1
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.542 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.800 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.800 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:30.800 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==:
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==:
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.801 nvme0n1
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.801 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.061 nvme0n1
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]]
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k:
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.061 17:50:12
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.061 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.062 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.321 nvme0n1 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.321 17:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.321 
17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.321 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
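The block above completes the sha384/ffdhe3072 pass of the key loop, and the same five-step round trip repeats below for ffdhe4096 and ffdhe6144: host/auth.sh provisions a DHHC-1 secret (plus a controller secret when one is defined) on the target via nvmet_auth_set_key, pins the host to a single digest/dhgroup pair with bdev_nvme_set_options, resolves the initiator address through get_main_ns_ip (rdma maps to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP, which is 10.0.0.1 here), attaches with bdev_nvme_attach_controller, confirms the controller surfaced as nvme0, and detaches. A minimal host-side sketch of one iteration, reconstructed from the RPCs visible in this trace rather than taken from the harness itself (SPDK_DIR is a placeholder, rpc_cmd is assumed to wrap scripts/rpc.py, and key1/ckey1 are assumed to be keyring names registered beforehand, e.g. with the keyring_file_add_key RPC):

  #!/usr/bin/env bash
  # One sha384/ffdhe3072 DH-HMAC-CHAP round trip, mirroring host/auth.sh@60-65.
  rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # $SPDK_DIR: placeholder path

  # Restrict negotiation to exactly one digest and one DH group (auth.sh@60).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Attach through the TCP initiator address that get_main_ns_ip resolved (auth.sh@61).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # The attach only succeeds if authentication did; verify, then tear down (auth.sh@64-65).
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Only the digest, dhgroup, and keyid vary between iterations of the trace; everything else in the loop is this same sequence.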
00:29:31.596 nvme0n1 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:31.596 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.597 17:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.597 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.870 nvme0n1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.871 17:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.871 17:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.871 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.130 nvme0n1 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.130 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.389 nvme0n1 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.389 17:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.389 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.390 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.390 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.390 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.390 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.390 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.648 nvme0n1 00:29:32.648 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.648 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.648 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.648 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.648 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.649 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.906 nvme0n1 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.906 17:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.906 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.472 nvme0n1 00:29:33.472 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.472 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.472 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.472 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.472 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.473 17:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.473 17:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.473 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.732 nvme0n1 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.732 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.298 nvme0n1 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.298 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.299 17:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.557 nvme0n1 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.557 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.816 17:50:16 
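
At this point the trace has completed several full authentication rounds, and its shape is visible from the xtrace markers: host/auth.sh@100-104 drive three nested loops (digest, DH group, key id), and each iteration programs the target side with nvmet_auth_set_key, then runs connect_authenticate to attach, verify, and detach the controller. A reconstructed skeleton of that driver loop, inferred from the markers above (the exact contents of the digests/dhgroups arrays are an assumption; the log only shows sha384/sha512 and ffdhe2048 through ffdhe8192):

    # Sketch of the loop this trace is executing (host/auth.sh@100-104),
    # reconstructed from the xtrace output; array contents are inferred.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do       # key ids 0-4; id 4 has no ctrlr key
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
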
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.816 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.075 nvme0n1 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
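
Note how key id 4 differs from ids 0-3: its ckey is empty, so the attach above carries only --dhchap-key key4 and no --dhchap-ctrlr-key, i.e. unidirectional authentication. This is the effect of the array assignment traced at host/auth.sh@58, which uses the ${var:+word} expansion to emit the flag pair only when a controller key exists. A minimal, self-contained demonstration (values hypothetical):

    # ${ckeys[keyid]:+...} expands to nothing when the entry is empty/unset,
    # so the ckey array is empty and the flag is simply omitted.
    ckeys=([0]=some-secret [4]='')
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"      # prints 0 -> one-way authentication
    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"       # prints: --dhchap-ctrlr-key ckey0
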
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:35.075 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
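
The bare echo commands traced at host/auth.sh@48-51 ('hmac(sha384)', the DH group name, the key, and the controller key) have their redirections stripped by xtrace, so the destinations are not shown. A plausible reading, given that nvmet_auth_set_key configures the kernel soft target, is that they write the DH-HMAC-CHAP attributes of the host entry under nvmet configfs. A hedged sketch of that (the attribute paths are assumptions, not confirmed by this log):

    # Assumed target-side effect of nvmet_auth_set_key; configfs attribute
    # names are assumptions based on the kernel nvmet host entry layout.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host/dhchap_hash"       # digest for this round
    echo 'ffdhe8192'     > "$host/dhchap_dhgroup"    # DH group for this round
    echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host secret (truncated here)
    echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"   # controller secret, if bidirectional
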
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.076 17:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 nvme0n1 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:36.011 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.012 17:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.579 nvme0n1 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.579 17:50:18 
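
The get_main_ns_ip helper is traced in full just above (nvmf/common.sh@765-779): it maps the transport to the name of an environment variable, then resolves that name with indirect expansion to get the address used in the attach (10.0.0.1 here). A reconstruction from the trace; the name of the transport variable is an assumption (it expanded to "tcp" before xtrace printed it):

    # Reconstructed from the xtrace above; $TEST_TRANSPORT is assumed to be
    # the variable that expanded to "tcp" at nvmf/common.sh@771.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # holds a variable *name*
        [[ -z ${!ip} ]] && return 1             # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }
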
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.579 17:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.579 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.145 nvme0n1 00:29:37.145 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.145 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.404 17:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.404 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.404 
17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.969 nvme0n1 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
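
All the secrets in this log use the NVMe DH-HMAC-CHAP textual representation DHHC-1:SS:<base64>:, and the rounds above cycle through subtypes 00-03. As I read the format, the two digits select the optional secret transformation (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret plus a short integrity tail. nvme-cli can mint secrets of this shape; the flag spellings below are assumptions and should be checked against `nvme gen-dhchap-key --help` for your version:

    # Hedged example; flags assumed from nvme-cli's gen-dhchap-key subcommand.
    nvme gen-dhchap-key --key-length=32 --hmac=0   # -> DHHC-1:00:<base64>:
    nvme gen-dhchap-key --key-length=48 --hmac=2   # -> DHHC-1:02:<base64>:
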
common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.969 17:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.535 nvme0n1 00:29:38.535 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.535 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.535 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.535 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.535 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:38.794 17:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:38.794 17:50:20 
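
The loop has now moved to sha512/ffdhe2048, and each round re-issues bdev_nvme_set_options to pin the initiator to exactly one digest and one DH group, so the subsequent attach can only succeed if the target negotiates that combination. The rpc_cmd wrapper in the trace corresponds to SPDK's JSON-RPC client; issued directly that would look like the following (rpc.py path assumed relative to an SPDK checkout; key0/ckey0 name keyring entries presumably registered earlier in the log):

    # Equivalent direct RPC calls; flags are verbatim from the trace above.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
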
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 nvme0n1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:38.794 17:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.794 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.795 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.053 nvme0n1 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:39.053 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.054 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.312 nvme0n1 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:39.312 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.313 17:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.313 nvme0n1 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
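
Each successful attach is verified and torn down the same way (host/auth.sh@64-65): list controllers, confirm the single name is nvme0, then detach. The comparison looks odd in the trace only because xtrace escapes the quoted right-hand side of [[ ... == ... ]] character by character (\n\v\m\e\0); it is a literal string match, not a glob. Reconstructed:

    # Verify/teardown step as traced; rpc_cmd is the test suite's RPC wrapper.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]               # xtrace renders this RHS as \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0
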
common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.313 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 nvme0n1 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.572 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.831 nvme0n1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.831 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 nvme0n1 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:40.091 
17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 nvme0n1 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.091 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.350 
17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.350 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.351 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.351 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.351 17:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.351 nvme0n1 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.351 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.609 nvme0n1 00:29:40.609 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.610 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.868 nvme0n1 00:29:40.868 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.868 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.868 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.868 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.868 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.869 
17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.869 17:50:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.869 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.127 nvme0n1 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:41.127 17:50:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.127 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.128 17:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.386 nvme0n1 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:41.386 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.387 17:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.387 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 nvme0n1 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:41.646 
17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
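[editorial note] The trace above repeats one cycle of host/auth.sh's connect_authenticate for every digest/dhgroup/keyid combination. Below is a minimal sketch of that cycle, not part of the captured log: it calls scripts/rpc.py directly (which the test's rpc_cmd wrapper invokes), reuses only the RPC names and flags visible in the trace, and assumes the target is already listening on 10.0.0.1:4420 with the DHHC-1 secrets from this run installed earlier (target side via nvmet_auth_set_key, host side as named keys key0..key4/ckey0..ckey4). SPDK_DIR is a hypothetical path.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (digest=sha512, dhgroup=ffdhe4096, keyid=4).
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"

# Restrict the host to one digest/dhgroup pair, as auth.sh@60 does per iteration.
"$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Attach with DH-HMAC-CHAP (auth.sh@61). keyid=4 has no controller key in this
# run, so --dhchap-ctrlr-key is omitted; other keyids pass "--dhchap-ctrlr-key ckeyN".
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

# Authentication succeeded if the controller shows up by name (auth.sh@64);
# then tear it down before the next iteration (auth.sh@65).
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

[end editorial note]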
00:29:41.905 nvme0n1 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.905 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.164 17:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.164 17:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.423 nvme0n1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.423 17:50:24 
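Target-side provisioning is the nvmet_auth_set_key helper that opens each round: the trace shows it echoing the hash name, the DH group, and the DHHC-1 secrets, presumably into the kernel nvmet configfs entry for the host. A hedged sketch assuming the standard Linux nvmet configfs layout (the secrets here are illustrative placeholders, not the keys from this run):

  # point at the nvmet host entry for the initiator NQN
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'  > "$HOST/dhchap_hash"      # digest for this round
  echo ffdhe6144       > "$HOST/dhchap_dhgroup"   # DH group for this round
  echo 'DHHC-1:00:...' > "$HOST/dhchap_key"       # host secret (keyN in the trace)
  echo 'DHHC-1:02:...' > "$HOST/dhchap_ctrl_key"  # controller secret (ckeyN), only
                                                  # when bidirectional auth is tested

The DHHC-1:NN: prefix on each secret encodes its transformation per the NVMe DH-HMAC-CHAP secret representation: 00 means the secret is used as-is, while 01/02/03 mark SHA-256/SHA-384/SHA-512-transformed secrets.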
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.423 17:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.423 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.989 nvme0n1 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:42.989 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:42.990 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.990 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.990 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.264 nvme0n1 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.264 17:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.264 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.265 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.265 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.265 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.265 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.265 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.831 nvme0n1 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.831 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.832 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.090 nvme0n1 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.090 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY0NGYzM2Q5Mjk3NTZlOTYxMmRlNjc3N2I5OWI5MmSD6oMA: 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: ]] 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTY1ZTJiZDIxNmYyMDZhNmQzNTUwY2ZjYWJlNmU1YTA5ZGE2YjAyYjljNjM5ZGM5OWJmMzllOGNlODdjZWRhZszdVho=: 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.384 17:50:25 
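The get_main_ns_ip expansion that recurs throughout this trace picks which shell variable carries the address to dial, keyed by transport. A hedged reconstruction from the xtrace output (the candidate map and the individual checks are exactly as traced; the glue between them is inferred):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # bail out if the transport is unset or has no candidate variable
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # dereference the variable name; it expands to 10.0.0.1 in this run
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }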
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.384 17:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.965 nvme0n1 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.965 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.966 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.966 17:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.966 17:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.531 nvme0n1 00:29:45.532 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.532 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.532 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.532 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.532 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.533 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.791 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.791 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:45.791 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:45.791 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.792 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.359 nvme0n1 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTMzMjk0ZmIxM2E3ZmM2NjUyMzIwOGZkNDZmYWYzYWE2NTE2ZDI3MGUxMTUzZWU0IvPcHQ==: 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmJiNjIxMDY2MjY4MDBlZTUyNDYzNzQ2NzIxMzMxODGV062k: 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.359 17:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.925 nvme0n1 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjIwYzIwNTVlOTMwZTFmNzliZTNlNjBlNjlhNjg5MGNhZTg2ZGEyMTZkMmM1YTc3OGI2NzdkZGMwMGU4ODhiND5WAKg=: 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.925 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.926 17:50:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.926 17:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.860 nvme0n1 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
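From here the trace leaves the happy path: keys for sha256/ffdhe2048 are installed on the target, but the host then attempts attaches that are expected to fail (missing or mismatched --dhchap-key), so each attempt is wrapped in NOT from common/autotest_common.sh, which inverts the exit status; the JSON-RPC error below (Code=-5, Input/output error) is therefore the pass condition. A minimal sketch of such a wrapper, inferred from the es bookkeeping visible in the expansion:

  # succeed only when the wrapped command fails with an ordinary error
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"  # exit codes above 128 (signals) stay failures
      (( es != 0 ))                   # invert: non-zero exit reports success
  }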
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:47.860 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.861 2024/11/20 17:50:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:47.861 request: 00:29:47.861 { 00:29:47.861 "method": "bdev_nvme_attach_controller", 00:29:47.861 "params": { 00:29:47.861 "name": "nvme0", 00:29:47.861 "trtype": "tcp", 00:29:47.861 "traddr": "10.0.0.1", 00:29:47.861 "adrfam": "ipv4", 00:29:47.861 "trsvcid": "4420", 00:29:47.861 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:47.861 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:47.861 "prchk_reftag": false, 00:29:47.861 "prchk_guard": false, 00:29:47.861 "hdgst": false, 00:29:47.861 "ddgst": false, 00:29:47.861 "allow_unrecognized_csi": false 00:29:47.861 } 00:29:47.861 } 00:29:47.861 Got JSON-RPC error response 00:29:47.861 GoRPCClient: error on JSON-RPC call 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.861 2024/11/20 17:50:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:47.861 request: 00:29:47.861 { 00:29:47.861 "method": "bdev_nvme_attach_controller", 00:29:47.861 "params": { 00:29:47.861 "name": "nvme0", 00:29:47.861 "trtype": "tcp", 00:29:47.861 "traddr": "10.0.0.1", 00:29:47.861 "adrfam": "ipv4", 00:29:47.861 "trsvcid": "4420", 00:29:47.861 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:47.861 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:47.861 "prchk_reftag": false, 00:29:47.861 "prchk_guard": false, 
00:29:47.861 "hdgst": false, 00:29:47.861 "ddgst": false, 00:29:47.861 "dhchap_key": "key2", 00:29:47.861 "allow_unrecognized_csi": false 00:29:47.861 } 00:29:47.861 } 00:29:47.861 Got JSON-RPC error response 00:29:47.861 GoRPCClient: error on JSON-RPC call 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:29:47.861 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.862 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:47.862 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.862 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.120 2024/11/20 17:50:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:48.120 request: 00:29:48.120 { 00:29:48.120 "method": "bdev_nvme_attach_controller", 00:29:48.120 "params": { 00:29:48.120 "name": "nvme0", 00:29:48.120 "trtype": "tcp", 00:29:48.120 "traddr": "10.0.0.1", 00:29:48.120 "adrfam": "ipv4", 00:29:48.120 "trsvcid": "4420", 00:29:48.120 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:48.120 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:48.120 "prchk_reftag": false, 00:29:48.120 "prchk_guard": false, 00:29:48.120 "hdgst": false, 00:29:48.120 "ddgst": false, 00:29:48.120 "dhchap_key": "key1", 00:29:48.120 "dhchap_ctrlr_key": "ckey2", 00:29:48.120 "allow_unrecognized_csi": false 00:29:48.120 } 00:29:48.120 } 00:29:48.120 Got JSON-RPC error response 00:29:48.120 GoRPCClient: error on JSON-RPC call 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 
10.0.0.1 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.120 nvme0n1 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.120 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.121 2024/11/20 17:50:29 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:29:48.121 request: 00:29:48.121 { 00:29:48.121 "method": "bdev_nvme_set_keys", 00:29:48.121 "params": { 00:29:48.121 "name": "nvme0", 00:29:48.121 "dhchap_key": "key1", 00:29:48.121 "dhchap_ctrlr_key": "ckey2" 00:29:48.121 } 00:29:48.121 } 00:29:48.121 Got JSON-RPC error response 00:29:48.121 GoRPCClient: error on JSON-RPC call 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.121 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.379 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.379 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:48.379 17:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:49.312 17:50:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.312 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ3MjZmNWNjZTYwMWFmYzdjZmJiMGQ2ODA3YTNkYTZhYWRjMjgyMmFiNDBhNjdickFpAQ==: 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: ]] 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0ZjhkYzYzODFjMGVjMTFjYThjOTM5YzkzYzg5ODIzN2RlOGMwZjVhOTcyMDI1FAj5dg==: 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.313 17:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.313 nvme0n1 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
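[Editorial note, not part of the captured run] The attach traced above is the positive DHCHAP path: host key key1 and controller key ckey1 match what nvmet_auth_set_key loaded into the kernel target, and the deliberately short --ctrlr-loss-timeout-sec 1 / --reconnect-delay-sec 1 timers let the re-key checks that follow observe a controller drop within about a second. For reference, the same call issued by hand through SPDK's rpc.py (which the harness's rpc_cmd wraps) would look roughly like the sketch below; it assumes keys named key1/ckey1 were already loaded on the initiator side, as the harness did earlier in the run:

  # attach an NVMe/TCP controller with bidirectional DHCHAP auth and 1s loss/reconnect timers
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1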
00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4NzExMTFjZTVlYTgwZTkyZTMyMTQxMjFjMWQ1YTL92I7P: 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: ]] 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgwOTc3MGY2MjNlNTBmZGJkNTljNDQ2OTJhYzNhNmQsmm4/: 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.313 2024/11/20 17:50:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:29:49.313 request: 00:29:49.313 { 00:29:49.313 "method": "bdev_nvme_set_keys", 00:29:49.313 "params": { 00:29:49.313 "name": "nvme0", 00:29:49.313 "dhchap_key": "key2", 00:29:49.313 "dhchap_ctrlr_key": "ckey1" 00:29:49.313 } 00:29:49.313 } 00:29:49.313 Got JSON-RPC error response 00:29:49.313 GoRPCClient: error on JSON-RPC call 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:49.313 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:49.571 17:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:49.571 17:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:50.506 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.507 rmmod nvme_tcp 00:29:50.507 rmmod nvme_fabrics 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 113221 ']' 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 113221 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 113221 ']' 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 113221 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113221 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:50.507 killing process with pid 113221
00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113221' 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 113221 00:29:50.507 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 113221 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:50.765 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0
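[Editorial note, not part of the captured run] The nvmf_tcp_fini sequence above tears down the test firewall and network without any per-rule bookkeeping: every iptables rule the harness installs carries an SPDK_NVMF comment, so cleanup is a single filter pass over the saved ruleset, followed by the veth and namespace deletions. A stand-alone restatement of that iptr idiom, exactly as traced at nvmf/common.sh@787 (must run as root):

  # re-apply the current ruleset minus every harness-tagged rule
  iptables-save | grep -v SPDK_NVMF | iptables-restore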
00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:29:51.023 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:29:51.282 17:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:51.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:51.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:52.106 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:52.106 17:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RWT /tmp/spdk.key-null.9XA /tmp/spdk.key-sha256.kHi /tmp/spdk.key-sha384.UcT /tmp/spdk.key-sha512.fDv /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:29:52.106 17:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:52.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:52.364 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:52.364 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:52.364 00:29:52.364 real 0m39.352s 00:29:52.364 user 0m35.340s 00:29:52.364 sys 0m4.219s 00:29:52.364 17:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.364 17:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.364 ************************************ 00:29:52.364 END TEST nvmf_auth_host 00:29:52.364 ************************************ 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.726 ************************************ 00:29:52.726 START TEST nvmf_digest 
************************************ 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:52.726 * Looking for test storage... 00:29:52.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.726 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:52.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.727 --rc genhtml_branch_coverage=1 00:29:52.727 --rc genhtml_function_coverage=1 00:29:52.727 --rc genhtml_legend=1 00:29:52.727 --rc geninfo_all_blocks=1 00:29:52.727 --rc geninfo_unexecuted_blocks=1 00:29:52.727 00:29:52.727 ' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:52.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.727 --rc genhtml_branch_coverage=1 00:29:52.727 --rc genhtml_function_coverage=1 00:29:52.727 --rc genhtml_legend=1 00:29:52.727 --rc geninfo_all_blocks=1 00:29:52.727 --rc geninfo_unexecuted_blocks=1 00:29:52.727 00:29:52.727 ' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:52.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.727 --rc genhtml_branch_coverage=1 00:29:52.727 --rc genhtml_function_coverage=1 00:29:52.727 --rc genhtml_legend=1 00:29:52.727 --rc geninfo_all_blocks=1 00:29:52.727 --rc geninfo_unexecuted_blocks=1 00:29:52.727 00:29:52.727 ' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:52.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.727 --rc genhtml_branch_coverage=1 00:29:52.727 --rc genhtml_function_coverage=1 00:29:52.727 --rc genhtml_legend=1 00:29:52.727 --rc geninfo_all_blocks=1 00:29:52.727 --rc geninfo_unexecuted_blocks=1 00:29:52.727 00:29:52.727 ' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.727 17:50:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.727 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:52.727 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:52.728 Cannot find device "nvmf_init_br" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:52.728 Cannot find device "nvmf_init_br2" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:52.728 Cannot find device "nvmf_tgt_br" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:29:52.728 Cannot find device "nvmf_tgt_br2" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:52.728 Cannot find device "nvmf_init_br" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:52.728 Cannot find device "nvmf_init_br2" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:52.728 Cannot find device "nvmf_tgt_br" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:52.728 Cannot find device "nvmf_tgt_br2" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:52.728 Cannot find device "nvmf_br" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:52.728 Cannot find device "nvmf_init_if" 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:29:52.728 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:52.986 Cannot find device "nvmf_init_if2" 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:52.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:52.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:52.986 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:52.986 17:50:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:52.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:52.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:29:52.987 00:29:52.987 --- 10.0.0.3 ping statistics --- 00:29:52.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.987 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:52.987 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:52.987 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:29:52.987 00:29:52.987 --- 10.0.0.4 ping statistics --- 00:29:52.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.987 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:52.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:52.987 00:29:52.987 --- 10.0.0.1 ping statistics --- 00:29:52.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.987 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:52.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:29:52.987 00:29:52.987 --- 10.0.0.2 ping statistics --- 00:29:52.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.987 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:52.987 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:53.247 ************************************ 00:29:53.247 START TEST nvmf_digest_clean 00:29:53.247 ************************************ 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
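[annotation] The trace above is nvmf_veth_init building the test topology; the earlier "Cannot find device" lines are just the idempotent teardown of any previous run. The harness creates two initiator-side and two target-side veth pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, bridges everything over nvmf_br, opens TCP port 4420 in iptables, and ping-checks all four addresses. A condensed sketch of one initiator/target leg, reconstructed from the commands traced above (the real script does this twice, for .1/.3 and .2/.4):

    # rebuild one leg of the harness topology (sketch, from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the two legs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> target sanity check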
00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:53.247 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=114922 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 114922 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114922 ']' 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.248 17:50:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.248 [2024-11-20 17:50:34.857458] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:53.248 [2024-11-20 17:50:34.857567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.248 [2024-11-20 17:50:35.002273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.507 [2024-11-20 17:50:35.090426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.507 [2024-11-20 17:50:35.090493] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.507 [2024-11-20 17:50:35.090506] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.507 [2024-11-20 17:50:35.090516] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.507 [2024-11-20 17:50:35.090526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
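[annotation] nvmfappstart launches the target inside the namespace (note the ip netns exec nvmf_tgt_ns_spdk prefix folded into NVMF_APP earlier) with --wait-for-rpc, so the app boots its reactors but holds off on framework init until an RPC says go; waitforlisten then blocks until the RPC socket appears. A simplified, hypothetical stand-in for that wait (the real helper in autotest_common.sh also tracks the pid and retries differently):

    # hypothetical minimal wait loop; illustrates the idea only
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            [[ -S $sock ]] && return 0   # socket exists -> target is listening
            sleep 0.1
        done
        return 1
    }

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    wait_for_rpc_sock /var/tmp/spdk.sock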
00:29:53.507 [2024-11-20 17:50:35.090558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.507 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.507 null0 00:29:53.507 [2024-11-20 17:50:35.291490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.766 [2024-11-20 17:50:35.315649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=114959 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 114959 /var/tmp/bperf.sock 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 114959 ']' 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:29:53.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.766 17:50:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:53.766 [2024-11-20 17:50:35.392515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:53.766 [2024-11-20 17:50:35.392643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114959 ] 00:29:53.766 [2024-11-20 17:50:35.539455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.025 [2024-11-20 17:50:35.640568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.960 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.960 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:54.960 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:54.960 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:54.960 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:55.219 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.219 17:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.785 nvme0n1 00:29:55.785 17:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:55.785 17:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:55.785 Running I/O for 2 seconds... 
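[annotation] bdevperf is also started with --wait-for-rpc, for the same reason as the target: the controller must be attached with data digest enabled before any I/O runs. The bring-up just traced, in plain form (paths and arguments exactly as in the trace; --ddgst is the flag that turns on the TCP data digest the crc32c accel module will compute):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. finish framework init now that accel options are settled
    $rpc -s /var/tmp/bperf.sock framework_start_init
    # 2. attach the target subsystem over TCP with data digest (--ddgst) enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0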
00:29:57.658 18231.00 IOPS, 71.21 MiB/s [2024-11-20T17:50:39.747Z] 18366.00 IOPS, 71.74 MiB/s 00:29:57.959 Latency(us) 00:29:57.959 [2024-11-20T17:50:39.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:57.959 nvme0n1 : 2.01 18380.75 71.80 0.00 0.00 6955.58 3902.37 14537.08 00:29:57.959 [2024-11-20T17:50:39.747Z] =================================================================================================================== 00:29:57.959 [2024-11-20T17:50:39.747Z] Total : 18380.75 71.80 0.00 0.00 6955.58 3902.37 14537.08 00:29:57.959 { 00:29:57.959 "results": [ 00:29:57.959 { 00:29:57.959 "job": "nvme0n1", 00:29:57.959 "core_mask": "0x2", 00:29:57.959 "workload": "randread", 00:29:57.959 "status": "finished", 00:29:57.959 "queue_depth": 128, 00:29:57.959 "io_size": 4096, 00:29:57.959 "runtime": 2.006882, 00:29:57.959 "iops": 18380.751832942842, 00:29:57.959 "mibps": 71.79981184743298, 00:29:57.959 "io_failed": 0, 00:29:57.959 "io_timeout": 0, 00:29:57.959 "avg_latency_us": 6955.578295380612, 00:29:57.959 "min_latency_us": 3902.370909090909, 00:29:57.959 "max_latency_us": 14537.076363636364 00:29:57.959 } 00:29:57.959 ], 00:29:57.959 "core_count": 1 00:29:57.959 } 00:29:57.959 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:57.959 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:57.959 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:57.959 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:57.959 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:57.959 | select(.opcode=="crc32c") 00:29:57.959 | "\(.module_name) \(.executed)"' 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 114959 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114959 ']' 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114959 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114959 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:58.240 killing process with pid 114959 00:29:58.240 Received shutdown signal, test time was about 2.000000 seconds 00:29:58.240 
00:29:58.240 Latency(us) 00:29:58.240 [2024-11-20T17:50:40.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.240 [2024-11-20T17:50:40.028Z] =================================================================================================================== 00:29:58.240 [2024-11-20T17:50:40.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114959' 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114959 00:29:58.240 17:50:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114959 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=115049 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 115049 /var/tmp/bperf.sock 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 115049 ']' 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.499 17:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:58.499 [2024-11-20 17:50:40.171845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:58.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:58.499 Zero copy mechanism will not be used. 
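[annotation] The second pass switches from 4 KiB at queue depth 128 to 128 KiB at queue depth 16, and bdevperf immediately notes that 131072 bytes exceeds the 65536-byte zero-copy threshold, so buffers are copied rather than mapped. Only the size and depth flags change between passes; side by side (binary path as traced, each invocation starting a fresh bdevperf process):

    bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # pass 1: small blocks, deep queue -> zero copy stays in play
    $bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096   -t 2 -q 128 -z --wait-for-rpc
    # pass 2: 128 KiB blocks, shallow queue -> "Zero copy mechanism will not be used."
    $bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16  -z --wait-for-rpc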
00:29:58.499 [2024-11-20 17:50:40.172500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115049 ] 00:29:58.758 [2024-11-20 17:50:40.316215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.758 [2024-11-20 17:50:40.421021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.693 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.693 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:59.693 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:59.693 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:59.693 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:59.952 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.952 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.211 nvme0n1 00:30:00.211 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:00.211 17:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.470 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:00.470 Zero copy mechanism will not be used. 00:30:00.470 Running I/O for 2 seconds... 
00:30:02.345 6870.00 IOPS, 858.75 MiB/s [2024-11-20T17:50:44.133Z] 7097.00 IOPS, 887.12 MiB/s 00:30:02.345 Latency(us) 00:30:02.345 [2024-11-20T17:50:44.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.345 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:02.345 nvme0n1 : 2.00 7094.99 886.87 0.00 0.00 2251.18 673.98 10604.92 00:30:02.345 [2024-11-20T17:50:44.133Z] =================================================================================================================== 00:30:02.345 [2024-11-20T17:50:44.133Z] Total : 7094.99 886.87 0.00 0.00 2251.18 673.98 10604.92 00:30:02.345 { 00:30:02.345 "results": [ 00:30:02.345 { 00:30:02.345 "job": "nvme0n1", 00:30:02.345 "core_mask": "0x2", 00:30:02.345 "workload": "randread", 00:30:02.345 "status": "finished", 00:30:02.345 "queue_depth": 16, 00:30:02.345 "io_size": 131072, 00:30:02.345 "runtime": 2.003667, 00:30:02.345 "iops": 7094.991333390229, 00:30:02.345 "mibps": 886.8739166737786, 00:30:02.345 "io_failed": 0, 00:30:02.345 "io_timeout": 0, 00:30:02.345 "avg_latency_us": 2251.17588581368, 00:30:02.345 "min_latency_us": 673.9781818181818, 00:30:02.345 "max_latency_us": 10604.916363636363 00:30:02.345 } 00:30:02.345 ], 00:30:02.345 "core_count": 1 00:30:02.345 } 00:30:02.603 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:02.604 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:02.604 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:02.604 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:02.604 | select(.opcode=="crc32c") 00:30:02.604 | "\(.module_name) \(.executed)"' 00:30:02.604 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 115049 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 115049 ']' 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 115049 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115049 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
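[annotation] After every run the test reads accel_get_stats from the bperf socket and filters for the crc32c opcode; with DSA scanning off (scan_dsa=false) the expected module is software, and the assertion is simply that at least one digest was computed and that it ran in the expected module. The check, condensed from the jq filter traced above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <($rpc -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))                 # digests were actually computed
    [[ $acc_module == software ]]          # and by the software module, since DSA is off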
00:30:02.862 killing process with pid 115049 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115049' 00:30:02.862 Received shutdown signal, test time was about 2.000000 seconds 00:30:02.862 00:30:02.862 Latency(us) 00:30:02.862 [2024-11-20T17:50:44.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.862 [2024-11-20T17:50:44.650Z] =================================================================================================================== 00:30:02.862 [2024-11-20T17:50:44.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 115049 00:30:02.862 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 115049 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:03.120 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=115141 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 115141 /var/tmp/bperf.sock 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 115141 ']' 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:03.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:03.121 17:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:03.121 [2024-11-20 17:50:44.841154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:03.121 [2024-11-20 17:50:44.841273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115141 ] 00:30:03.379 [2024-11-20 17:50:44.978544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.379 [2024-11-20 17:50:45.109412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.313 17:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.313 17:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:04.313 17:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:04.313 17:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:04.313 17:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:04.571 17:50:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.571 17:50:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.829 nvme0n1 00:30:05.088 17:50:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:05.088 17:50:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:05.088 Running I/O for 2 seconds... 
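[annotation] Each pass follows the same rhythm: because bdevperf runs with -z it idles after setup instead of starting immediately, and the companion script sends a perform_tests RPC that kicks off the configured 2-second run. The results then come back both as the human-readable latency table and as the JSON blob seen after each pass (e.g. results[0].iops). Triggering a run by hand looks like:

    # -z left bdevperf waiting; perform_tests starts the 2-second run
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests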
00:30:06.958 19979.00 IOPS, 78.04 MiB/s [2024-11-20T17:50:48.746Z] 20672.00 IOPS, 80.75 MiB/s 00:30:06.958 Latency(us) 00:30:06.958 [2024-11-20T17:50:48.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.958 nvme0n1 : 2.00 20690.71 80.82 0.00 0.00 6180.26 2502.28 20256.58 00:30:06.958 [2024-11-20T17:50:48.746Z] =================================================================================================================== 00:30:06.958 [2024-11-20T17:50:48.746Z] Total : 20690.71 80.82 0.00 0.00 6180.26 2502.28 20256.58 00:30:06.958 { 00:30:06.958 "results": [ 00:30:06.958 { 00:30:06.958 "job": "nvme0n1", 00:30:06.958 "core_mask": "0x2", 00:30:06.958 "workload": "randwrite", 00:30:06.958 "status": "finished", 00:30:06.958 "queue_depth": 128, 00:30:06.958 "io_size": 4096, 00:30:06.958 "runtime": 2.004378, 00:30:06.958 "iops": 20690.70804010022, 00:30:06.958 "mibps": 80.82307828164149, 00:30:06.958 "io_failed": 0, 00:30:06.958 "io_timeout": 0, 00:30:06.958 "avg_latency_us": 6180.258765432099, 00:30:06.958 "min_latency_us": 2502.2836363636366, 00:30:06.958 "max_latency_us": 20256.581818181818 00:30:06.958 } 00:30:06.958 ], 00:30:06.958 "core_count": 1 00:30:06.958 } 00:30:07.270 17:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:07.270 17:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:07.270 17:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:07.270 17:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:07.270 | select(.opcode=="crc32c") 00:30:07.270 | "\(.module_name) \(.executed)"' 00:30:07.270 17:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 115141 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 115141 ']' 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 115141 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:07.270 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115141 00:30:07.533 killing process with pid 115141 00:30:07.533 Received shutdown signal, test time was about 2.000000 seconds 00:30:07.533 00:30:07.533 Latency(us) 00:30:07.533 [2024-11-20T17:50:49.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:30:07.533 [2024-11-20T17:50:49.321Z] =================================================================================================================== 00:30:07.533 [2024-11-20T17:50:49.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.533 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:07.533 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:07.533 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115141' 00:30:07.533 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 115141 00:30:07.533 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 115141 00:30:07.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=115235 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 115235 /var/tmp/bperf.sock 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 115235 ']' 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.816 17:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:07.816 Zero copy mechanism will not be used. 00:30:07.816 [2024-11-20 17:50:49.412254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:07.816 [2024-11-20 17:50:49.412356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115235 ] 00:30:07.816 [2024-11-20 17:50:49.553120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.075 [2024-11-20 17:50:49.674318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.009 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.009 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:09.009 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:09.009 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:09.009 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:09.267 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.267 17:50:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.525 nvme0n1 00:30:09.525 17:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:09.525 17:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.525 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:09.525 Zero copy mechanism will not be used. 00:30:09.525 Running I/O for 2 seconds... 
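[annotation] This fourth run closes out the clean-digest matrix: randread and randwrite, each at 4 KiB/qd128 and 128 KiB/qd16, always with DSA scanning off. Judging from the four run_bperf calls traced in this test (positional args: rw, block size, queue depth, scan_dsa), the driver sequence is equivalent to:

    # the four clean-digest passes exercised by nvmf_digest_clean
    for rw in randread randwrite; do
        run_bperf "$rw" 4096   128 false   # small blocks, deep queue
        run_bperf "$rw" 131072 16  false   # large blocks, shallow queue
    done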
00:30:11.834 5439.00 IOPS, 679.88 MiB/s [2024-11-20T17:50:53.622Z] 5552.50 IOPS, 694.06 MiB/s 00:30:11.834 Latency(us) 00:30:11.834 [2024-11-20T17:50:53.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.834 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:11.834 nvme0n1 : 2.00 5551.92 693.99 0.00 0.00 2875.75 1779.90 5242.88 00:30:11.834 [2024-11-20T17:50:53.622Z] =================================================================================================================== 00:30:11.834 [2024-11-20T17:50:53.622Z] Total : 5551.92 693.99 0.00 0.00 2875.75 1779.90 5242.88 00:30:11.834 { 00:30:11.834 "results": [ 00:30:11.834 { 00:30:11.834 "job": "nvme0n1", 00:30:11.834 "core_mask": "0x2", 00:30:11.834 "workload": "randwrite", 00:30:11.834 "status": "finished", 00:30:11.834 "queue_depth": 16, 00:30:11.834 "io_size": 131072, 00:30:11.834 "runtime": 2.004172, 00:30:11.834 "iops": 5551.918697596813, 00:30:11.834 "mibps": 693.9898371996017, 00:30:11.834 "io_failed": 0, 00:30:11.834 "io_timeout": 0, 00:30:11.834 "avg_latency_us": 2875.7459244916136, 00:30:11.834 "min_latency_us": 1779.898181818182, 00:30:11.834 "max_latency_us": 5242.88 00:30:11.834 } 00:30:11.834 ], 00:30:11.834 "core_count": 1 00:30:11.834 } 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:11.834 | select(.opcode=="crc32c") 00:30:11.834 | "\(.module_name) \(.executed)"' 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 115235 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 115235 ']' 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 115235 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.834 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115235 00:30:12.093 killing process with pid 115235 00:30:12.093 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.093 00:30:12.093 Latency(us) 00:30:12.093 [2024-11-20T17:50:53.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:30:12.093 [2024-11-20T17:50:53.881Z] =================================================================================================================== 00:30:12.093 [2024-11-20T17:50:53.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.093 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:12.093 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:12.093 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115235' 00:30:12.093 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 115235 00:30:12.093 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 115235 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 114922 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 114922 ']' 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 114922 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114922 00:30:12.351 killing process with pid 114922 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114922' 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 114922 00:30:12.351 17:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 114922 00:30:12.609 ************************************ 00:30:12.609 END TEST nvmf_digest_clean 00:30:12.609 ************************************ 00:30:12.609 00:30:12.609 real 0m19.439s 00:30:12.609 user 0m37.977s 00:30:12.609 sys 0m5.140s 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 ************************************ 00:30:12.609 START TEST nvmf_digest_error 00:30:12.609 ************************************ 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:30:12.609 17:50:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=115345 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 115345 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115345 ']' 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.609 17:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:12.609 [2024-11-20 17:50:54.386475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:12.609 [2024-11-20 17:50:54.386633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.868 [2024-11-20 17:50:54.532364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.868 [2024-11-20 17:50:54.635557] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.868 [2024-11-20 17:50:54.635645] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.868 [2024-11-20 17:50:54.635672] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.868 [2024-11-20 17:50:54.635680] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.868 [2024-11-20 17:50:54.635687] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:12.868 [2024-11-20 17:50:54.635714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.803 [2024-11-20 17:50:55.392356] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:13.803 null0 00:30:13.803 [2024-11-20 17:50:55.521206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.803 [2024-11-20 17:50:55.545339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115395 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115395 /var/tmp/bperf.sock 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115395 ']' 00:30:13.803 17:50:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.803 17:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:14.061 [2024-11-20 17:50:55.615164] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:14.061 [2024-11-20 17:50:55.615310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115395 ] 00:30:14.061 [2024-11-20 17:50:55.756934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.320 [2024-11-20 17:50:55.857602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.888 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:14.888 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:14.888 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:14.888 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.147 17:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.713 nvme0n1 00:30:15.713 17:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:15.713 17:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.713 17:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.713 17:50:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.713 17:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:15.713 17:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.713 Running I/O for 2 seconds... 00:30:15.713 [2024-11-20 17:50:57.490126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.713 [2024-11-20 17:50:57.490191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.713 [2024-11-20 17:50:57.490207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.504319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.504359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.504374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.518838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.518879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.518911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.533821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.533862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.533882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.547533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.547574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.547588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.560366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.560406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.560420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:15.971 [2024-11-20 17:50:57.573341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.573381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.573394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.585368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.585423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.585436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.599860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.971 [2024-11-20 17:50:57.599900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.971 [2024-11-20 17:50:57.599914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.971 [2024-11-20 17:50:57.614252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.614292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.614305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.629719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.629776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.629790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.643264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.643304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.643318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.657513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.657568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.657582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.672069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.672138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.672152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.686054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.686120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.686134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.700303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.700343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.700356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.714751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.714806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.714820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.728828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.728879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.728893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.741653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.741693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:15.972 [2024-11-20 17:50:57.755889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:15.972 [2024-11-20 17:50:57.755929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.972 [2024-11-20 17:50:57.755942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.770515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.770571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.770584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.783935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.783974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.783988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.798734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.798774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.798788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.812617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.812657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.812672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.826636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.826682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.826696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.839331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.839371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.839385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.853321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.853361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.853374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.868459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.868499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.868512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.881946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.230 [2024-11-20 17:50:57.881986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.230 [2024-11-20 17:50:57.881999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.230 [2024-11-20 17:50:57.896182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.896221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.896235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.910796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.910837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.910851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.923794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.923834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.923847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.939373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.939432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.939462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.953191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.953229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18802 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.953243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.967289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.967327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.967341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.980714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.980752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.980782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:57.994051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:57.994114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:57.994144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.231 [2024-11-20 17:50:58.008351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.231 [2024-11-20 17:50:58.008389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.231 [2024-11-20 17:50:58.008403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.022527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.022565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.022605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.036383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.036421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.036435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.050834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.050873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:10379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.050894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.064635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.064672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.064701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.078752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.078790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.078834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.093311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.093348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.093377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.105604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.105640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.105670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.120316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.120355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.120369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.135284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.135325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.135354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.150121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.150170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.150184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.164116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.164167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.164181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.178852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.178899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.178918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.193517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.490 [2024-11-20 17:50:58.193565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.490 [2024-11-20 17:50:58.193580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.490 [2024-11-20 17:50:58.207651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.491 [2024-11-20 17:50:58.207695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.491 [2024-11-20 17:50:58.207709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.491 [2024-11-20 17:50:58.219634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.491 [2024-11-20 17:50:58.219675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.491 [2024-11-20 17:50:58.219689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.491 [2024-11-20 17:50:58.235603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.491 [2024-11-20 17:50:58.235647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.491 [2024-11-20 17:50:58.235661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.491 [2024-11-20 17:50:58.250524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 
00:30:16.491 [2024-11-20 17:50:58.250564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.491 [2024-11-20 17:50:58.250594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.491 [2024-11-20 17:50:58.264523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.491 [2024-11-20 17:50:58.264564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.491 [2024-11-20 17:50:58.264594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.278305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.278346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.278360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.292158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.292198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.292212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.305940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.305980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.306010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.320775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.320815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.320829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.334859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.334907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.347554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.347599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.347612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.362488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.362528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.362558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.375735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.375774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.375803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.389912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.389955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.389969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.403851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.403907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.403924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.418706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.418746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.418759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.433475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.433514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.433545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.448152] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.448191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.448205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.460825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.460864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.460877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 17900.00 IOPS, 69.92 MiB/s [2024-11-20T17:50:58.552Z] [2024-11-20 17:50:58.476628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.476669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.476682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.490058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.490141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.490156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.506323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.506363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.506376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.520873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.520913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.764 [2024-11-20 17:50:58.520927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.764 [2024-11-20 17:50:58.533797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.764 [2024-11-20 17:50:58.533841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.765 [2024-11-20 17:50:58.533854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.765 [2024-11-20 17:50:58.547974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:16.765 [2024-11-20 17:50:58.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.765 [2024-11-20 17:50:58.548027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.562684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.562741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.562758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.577408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.577449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.577462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.592286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.592386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.592421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.608651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.608705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.608723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.625151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.625211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.625228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.641675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.641740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.641758] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.657016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.657092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.657119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.673534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.023 [2024-11-20 17:50:58.673669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.023 [2024-11-20 17:50:58.673693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.023 [2024-11-20 17:50:58.689969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.690083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.690105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.703669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.703729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.703746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.720539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.720588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.720606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.735252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.735302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.735330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.750473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.750525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.750542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.767701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.767779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.767808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.782193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.782245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.782262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.024 [2024-11-20 17:50:58.797439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.024 [2024-11-20 17:50:58.797494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.024 [2024-11-20 17:50:58.797512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.282 [2024-11-20 17:50:58.813004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.282 [2024-11-20 17:50:58.813055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.282 [2024-11-20 17:50:58.813092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.282 [2024-11-20 17:50:58.828568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.282 [2024-11-20 17:50:58.828621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.282 [2024-11-20 17:50:58.828635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.282 [2024-11-20 17:50:58.842556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.282 [2024-11-20 17:50:58.842610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.282 [2024-11-20 17:50:58.842634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.282 [2024-11-20 17:50:58.856612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0) 00:30:17.282 [2024-11-20 17:50:58.856654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:30:17.282 [2024-11-20 17:50:58.856668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:17.282 [2024-11-20 17:50:58.869441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0)
00:30:17.282 [2024-11-20 17:50:58.869482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:17.283 [2024-11-20 17:50:58.869496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[Note: roughly forty more completions in the same three-line pattern (data digest error on tqpair 0x6075f0, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) follow between 2024-11-20 17:50:58.884434 and 17:50:59.433355, differing only in timestamp, cid, and lba; they are condensed here.]
00:30:17.801 [2024-11-20 17:50:59.448110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0)
00:30:17.801 [2024-11-20 17:50:59.448164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:17.801 [2024-11-20 17:50:59.448178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:17.801 [2024-11-20 17:50:59.463244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6075f0)
00:30:17.801 [2024-11-20 17:50:59.463287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:17.801 [2024-11-20 17:50:59.463301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
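[Note: every completion in this run reports the same status pair, which SPDK prints in hex as (SCT/SC): status code type 0x0, the generic command status set, with status code 0x22, rendered by spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR. A minimal shell illustration of that split follows; it is not part of the test, and the variable names are made up:]
  status='00/22'                        # the (SCT/SC) pair as printed above
  sct=$((16#${status%/*}))              # 0x00 -> generic command status set
  sc=$((16#${status#*/}))               # 0x22 -> COMMAND TRANSIENT TRANSPORT ERROR
  printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"
[Note: the trailing p/m/dnr fields are the completion's phase, more, and do-not-retry bits; dnr:0 marks these errors as retryable, which is consistent with the --bdev-retry-count -1 option the test passes to bdev_nvme_set_options later in this log.]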
00:30:17.801 17616.00 IOPS, 68.81 MiB/s
00:30:17.801                Latency(us)
00:30:17.801 [2024-11-20T17:50:59.589Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:30:17.801 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:17.801 nvme0n1            :       2.00   17640.44    68.91     0.00   0.00   7248.19  3693.85 19899.11
00:30:17.801 [2024-11-20T17:50:59.589Z] ===================================================================================================================
00:30:17.801 [2024-11-20T17:50:59.589Z] Total              :              17640.44    68.91     0.00   0.00   7248.19  3693.85 19899.11
00:30:17.801 {
00:30:17.801   "results": [
00:30:17.801     {
00:30:17.801       "job": "nvme0n1",
00:30:17.802       "core_mask": "0x2",
00:30:17.802       "workload": "randread",
00:30:17.802       "status": "finished",
00:30:17.802       "queue_depth": 128,
00:30:17.802       "io_size": 4096,
00:30:17.802       "runtime": 2.004485,
00:30:17.802       "iops": 17640.441310361515,
00:30:17.802       "mibps": 68.90797386859967,
00:30:17.802       "io_failed": 0,
00:30:17.802       "io_timeout": 0,
00:30:17.802       "avg_latency_us": 7248.18560263266,
00:30:17.802       "min_latency_us": 3693.847272727273,
00:30:17.802       "max_latency_us": 19899.112727272728
00:30:17.802     }
00:30:17.802   ],
00:30:17.802   "core_count": 1
00:30:17.802 }
00:30:17.802 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:17.802 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:17.802 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:17.802 | .driver_specific
00:30:17.802 | .nvme_error
00:30:17.802 | .status_code
00:30:17.802 | .command_transient_transport_error'
00:30:17.802 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115395
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115395 ']'
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115395
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:18.059 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115395
00:30:18.317 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:18.317 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:18.317 killing process with pid 115395
00:30:18.317 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115395'
00:30:18.317 Received shutdown signal, test time was about 2.000000 seconds
00:30:18.317
00:30:18.317                Latency(us)
00:30:18.317 [2024-11-20T17:51:00.105Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:30:18.317 [2024-11-20T17:51:00.105Z] ===================================================================================================================
00:30:18.317 [2024-11-20T17:51:00.105Z] Total              :                  0.00     0.00     0.00   0.00      0.00     0.00     0.00
00:30:18.317 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115395
00:30:18.575 17:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115395
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115484
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115484 /var/tmp/bperf.sock
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115484 ']'
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:18.575 17:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:18.575 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:18.575 Zero copy mechanism will not be used.
00:30:18.575 [2024-11-20 17:51:00.212526] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:30:18.575 [2024-11-20 17:51:00.212630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115484 ]
00:30:18.575 [2024-11-20 17:51:00.348214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:18.719 [2024-11-20 17:51:00.480121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:19.770 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.337 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:20.337 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:20.337 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:20.337 nvme0n1
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:20.338 17:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:20.338 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:20.338 Zero copy mechanism will not be used.
00:30:20.338 Running I/O for 2 seconds...
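[Note: for readability, the setup the xtrace above performs can be collapsed to the bare commands below, in the order the trace runs them. This is a sketch reassembled from the trace, not the digest.sh source; the inline comments are interpretation, and reading -i 32 as an injection interval is an assumption:]
  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # Keep per-bdev NVMe error statistics and retry failed I/O at the bdev
  # layer indefinitely instead of failing the run on the first error.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with accel error injection switched off, ...
  $RPC accel_error_inject_error -o crc32c -t disable
  # ...attach the target with TCP data digest enabled (--ddgst), ...
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then corrupt crc32c results so data digest validation fails on the
  # host side and reads complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the timed workload, then read back the transient-error counter that
  # get_transient_errcount extracts with the jq filter shown verbatim above.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
[Note: the test then asserts that this counter is greater than zero, as the (( 138 > 0 )) check after the first run shows.]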
00:30:20.338 [2024-11-20 17:51:02.048552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:20.338 [2024-11-20 17:51:02.048635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.338 [2024-11-20 17:51:02.048649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:20.338 [2024-11-20 17:51:02.053756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:20.338 [2024-11-20 17:51:02.053801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.338 [2024-11-20 17:51:02.053814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[Note: the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern repeats for the remainder of the 131072-byte randread run on tqpair 0x1ac3930, from 2024-11-20 17:51:02.059643 through 17:51:02.427507, with only timestamp, cid, lba, and sqhd varying; those entries are condensed here.]
00:30:20.861 [2024-11-20 17:51:02.432320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:20.861 [2024-11-20 17:51:02.432367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:20.861 [2024-11-20 17:51:02.432382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.435710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.435755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.435770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.440612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.440662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.440677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.444080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.444130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.444152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.448566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.448632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.448657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.452638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.452684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.452698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.457082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.457167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.457184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.460946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.460989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.461021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.465348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.465396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.465418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.469911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.469963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.469978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.473566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.473611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.473625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.477825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.477869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.477883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.483325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.483373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.483388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.489222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.489289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.489312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.495901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:20.861 [2024-11-20 17:51:02.495983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.502310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.502367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.502388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.508773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.508831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.508858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.515203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.515259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.515280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.521542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.521600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.521621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.528261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.528322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.528345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.535041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.535116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.535139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.541552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.541607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.541643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.548198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.548246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.548261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.552703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.552746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.552776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.557061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.557141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.557159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.560999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.561073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.564762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.564803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.564833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.569187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.569229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.569243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.572486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.572528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.572542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.577488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.577531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.577545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.581729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.581772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.581787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.585852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.585894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.585908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.589388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.589431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.589445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.593222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.593265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.597153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.861 [2024-11-20 17:51:02.597192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.861 [2024-11-20 17:51:02.597206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.861 [2024-11-20 17:51:02.600978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.601020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.601049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.604860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.604903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.608162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.608201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.608246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.612252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.612292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.612321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.616639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.616697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.620165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.620207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.620224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.624576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.624620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.624634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.629227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 
00:30:20.862 [2024-11-20 17:51:02.629275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.629289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.633938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.634154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.634173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.636947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.636988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.637002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.862 [2024-11-20 17:51:02.641964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:20.862 [2024-11-20 17:51:02.642007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.862 [2024-11-20 17:51:02.642021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.647424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.647470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.647485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.651051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.651110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.651125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.655503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.655676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.655694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.660565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.660612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.660627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.665035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.665105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.665131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.670413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.670485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.674038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.674092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.674109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.678016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.678059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.678100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.682609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.682683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.682713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.685851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.685893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.685907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.689775] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.689818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.123 [2024-11-20 17:51:02.689847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.123 [2024-11-20 17:51:02.694272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.123 [2024-11-20 17:51:02.694314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.694328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.698740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.698781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.698812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.703391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.703457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.703487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.707139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.707184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.707198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.711259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.711298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.711329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.716268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.716313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.716328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:21.124 [2024-11-20 17:51:02.721186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.721230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.721246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.724100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.724167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.724182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.729223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.729403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.729423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.732907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.732951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.732965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.737281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.737324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.737338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.742037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.742223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.742241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.747586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.747644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.747658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.752756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.752803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.752818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.757645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.757691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.757705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.761883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.762079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.766574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.766620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.766634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.770145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.770186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.770200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.774555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.774763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.774781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.778650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.778694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.778724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.782380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.782576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.782723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.787051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.787248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.787417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.791940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.792344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.124 [2024-11-20 17:51:02.797264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.124 [2024-11-20 17:51:02.797443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.124 [2024-11-20 17:51:02.797573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.800531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.800722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.800845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.805206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.805386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.805518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.809824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.810004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.810212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.814345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.814527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.814550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.818376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.818420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.822375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.822419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.822433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.825750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.825794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.825808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.830169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.830341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.830362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.834577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.834622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.834635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.838261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.838304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:21.125 [2024-11-20 17:51:02.838318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.842763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.842806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.842821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.846004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.846046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.846060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.850184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.850227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.850241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.854508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.854552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.854567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.858107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.858147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.858161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.862353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.862404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.125 [2024-11-20 17:51:02.862419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.125 [2024-11-20 17:51:02.866469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.125 [2024-11-20 17:51:02.866510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.871207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.871268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.871283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.874535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.874577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.874590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.878715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.878758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.878772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.882783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.882954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.882973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.886690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.886733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.886747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.890567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.890609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.890623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.893708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.893749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.893763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.897863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.897916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.897931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.903008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.903051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.125 [2024-11-20 17:51:02.903079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.125 [2024-11-20 17:51:02.908030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.125 [2024-11-20 17:51:02.908084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.126 [2024-11-20 17:51:02.908100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.911249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.911292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.911322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.916426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.916472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.916487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.919914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.919957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.919972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.924352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.924398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.924413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.929770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.929830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.929856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.936709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.936931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.936960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.944162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.944225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.944250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.951389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.951452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.951477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.956330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.386 [2024-11-20 17:51:02.956375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.386 [2024-11-20 17:51:02.956389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.386 [2024-11-20 17:51:02.960866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.960910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.960925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.963879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.963920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.963935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.968488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.968531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.968546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.973981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.974211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.974238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.980707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.980767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.980790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.987262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.987308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.987323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.991604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.991769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.991788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.995279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.995322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.995338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:02.999405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:02.999451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:02.999466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.004465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.004513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.004527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.009774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.009826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.009841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.013466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.013511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.013526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.017883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.018075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.018096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.022715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.022761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.022775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.026064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.026240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.026259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.030299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.030344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.030358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.034583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.034626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.034640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 6486.00 IOPS, 810.75 MiB/s [2024-11-20T17:51:03.175Z] [2024-11-20 17:51:03.041463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.041510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.041526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.044977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.045023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.045037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.049113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.049156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.049170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.054113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.054151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.054165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.058569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.058613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.058629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.061746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.061928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.062061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.066376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.066421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.066436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.069779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.069822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.069836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.074170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.074214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.074228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.077857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.387 [2024-11-20 17:51:03.077900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.387 [2024-11-20 17:51:03.077914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.387 [2024-11-20 17:51:03.081798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.081840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.081870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.086039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.086094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.086110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.088942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.088987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.089001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.092945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.092987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.093001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.096539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.096581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.096596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.100719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.100763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.100777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.104297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.104341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.104354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.107792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.107958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.107976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.111283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.111326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.111340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.115531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.115574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.115588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.120272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.120315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.120329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.123681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.123724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.123739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.127913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.127957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.127971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.133005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.133050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.133078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.137919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.138104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.138123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.140851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.140894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.140908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.145507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.145550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.145565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.149120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.149164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.153310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.153353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.153367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.156966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.160836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.160879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.160894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.164686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.164856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.164873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.388 [2024-11-20 17:51:03.168996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.388 [2024-11-20 17:51:03.169168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.388 [2024-11-20 17:51:03.169186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.173344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.173390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.173405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.176730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.176789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.181243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.181288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.181303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.184801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.184845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.184859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.188471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.188514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.188529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.192872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.192916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.192930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.196444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.196490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.648 [2024-11-20 17:51:03.196505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.648 [2024-11-20 17:51:03.201261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.648 [2024-11-20 17:51:03.201306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.201321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.205845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.205905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.209701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.209759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.209776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.216234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.216293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.216317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.221671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.221715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.221730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.225834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.225877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.225891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.230401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.230445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.230460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.235393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.235442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.235457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.239905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.239957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.239972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.243052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.243106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.243120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.247572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.247748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.247767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.251250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.251285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.251299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.255680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.255840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.255858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.260584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.260630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.260644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.264744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.264790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.264804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.268341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.268502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.268520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.273453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.273526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.273554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.277788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.277831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.277860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.281539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.281715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.281732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.285494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.285536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.285567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.289521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.289565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.289579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.292638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.292694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.292725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.297419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.297475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.297490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.302399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.302445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.302459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.305992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.306217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.306237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.310573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.310741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.310771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.314758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.314938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.649 [2024-11-20 17:51:03.314956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.649 [2024-11-20 17:51:03.318190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.649 [2024-11-20 17:51:03.318232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.318246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.322500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.322661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.326792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.326968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.326987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.330878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.330955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.330969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.334672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.334714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.334728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.339329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.339372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.339386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.343181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.343222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.343236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.346844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.346897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.350612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.350670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.350685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.354957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.355013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.358299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.358341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.358355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.362284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.362325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.362339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.366043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.366111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.366126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.370060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.370141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.374284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.374326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.374356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.377970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.378010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.378040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.382030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.382081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.382097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.386162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.386204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.386218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.390437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.390599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.390617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.394359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.394400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.394414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.397356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.397397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.397411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.401440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.401486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.401500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.405917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.405962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.405977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.409421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.409464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.409493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.413520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.413562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.413592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.418550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.418592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.423152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.423194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.650 [2024-11-20 17:51:03.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.650 [2024-11-20 17:51:03.426291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.650 [2024-11-20 17:51:03.426331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.651 [2024-11-20 17:51:03.426345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.651 [2024-11-20 17:51:03.430461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.651 [2024-11-20 17:51:03.430504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.651 [2024-11-20 17:51:03.430519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.911 [2024-11-20 17:51:03.434813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.911 [2024-11-20 17:51:03.434874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.911 [2024-11-20 17:51:03.434907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.911 [2024-11-20 17:51:03.438800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.911 [2024-11-20 17:51:03.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.911 [2024-11-20 17:51:03.438872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:21.911 [2024-11-20 17:51:03.442696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.912 [2024-11-20 17:51:03.442740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.912 [2024-11-20 17:51:03.442755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.912 [2024-11-20 17:51:03.446938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.912 [2024-11-20 17:51:03.446982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.912 [2024-11-20 17:51:03.446995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:21.912 [2024-11-20 17:51:03.451025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.912 [2024-11-20 17:51:03.451077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.912 [2024-11-20 17:51:03.451092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:21.912 [2024-11-20 17:51:03.454564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930)
00:30:21.912 [2024-11-20 17:51:03.454606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.454620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.458810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.459007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.459026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.462827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.462872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.467041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.467106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.467121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.471786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.471951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.471971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.475260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.475305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.475319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.479770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.479837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.479856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.484609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.484791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.484809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.489174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.489209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.489222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.492360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.492400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.492413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.496317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.496356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.496370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.500238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.500275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.500289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.504159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.504194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.504207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.507695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.507734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.507747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.511783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 
[2024-11-20 17:51:03.511820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.511832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.515855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.515897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.515915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.520191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.520230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.520244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.523400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.523439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.523452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.527501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.527573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.527587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.531830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.531871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.531884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.535338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.535376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.535389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.539853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.539895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.539908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.544143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.544197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.544210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.912 [2024-11-20 17:51:03.547904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.912 [2024-11-20 17:51:03.547942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.912 [2024-11-20 17:51:03.547956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.552060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.552140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.552155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.556635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.556678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.556692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.560678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.560732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.560746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.564642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.564681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.564694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.569179] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.569222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.569236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.572680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.572732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.572744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.577039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.577105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.577119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.582408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.582465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.582478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.586135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.586203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.586216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.590831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.590896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.590935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.596071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.596124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.596139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:21.913 [2024-11-20 17:51:03.601242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.601288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.601302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.606148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.606194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.609543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.609586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.609600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.614632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.614678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.614691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.618353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.618393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.618406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.621690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.621742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.621755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.625896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.625937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.625950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.630171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.630209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.630222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.633843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.633896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.633909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.638033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.638103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.638117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.642753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.642791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.642803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.646219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.646257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.646270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.650535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.650587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.650600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.653890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.653927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.653940] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.658070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.658156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.913 [2024-11-20 17:51:03.662794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.913 [2024-11-20 17:51:03.662833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.913 [2024-11-20 17:51:03.662846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.666402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.666441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.666454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.670615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.670686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.670699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.674407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.674445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.674458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.678988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.679042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.682309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.682345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.682357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.686071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.686137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.686150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.690348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.690385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.690397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.914 [2024-11-20 17:51:03.694675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:21.914 [2024-11-20 17:51:03.694714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.914 [2024-11-20 17:51:03.694728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.699038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.699096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.699111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.702965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.703004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.703016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.706691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.706730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.706743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.710912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.710961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:22.174 [2024-11-20 17:51:03.710974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.714300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.714336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.714349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.718345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.718395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.721815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.721851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.721864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.726053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.726101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.726114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.730343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.730398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.733730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.733782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.733796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.737403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.737439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.737452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.741552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.741588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.174 [2024-11-20 17:51:03.741601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.174 [2024-11-20 17:51:03.744596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.174 [2024-11-20 17:51:03.744648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.744661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.748871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.748908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.748921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.752761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.752797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.752810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.757037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.757084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.757097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.760496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.760534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.760547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.764609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.764660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.764672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.768527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.768566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.768579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.772320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.772358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.772371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.776597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.776650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.776664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.780622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.780706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.784694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.784731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.784745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.788529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.788584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.788597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.792346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.792383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.792396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.796336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.796386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.796398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.800580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.800631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.800658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.803951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.803993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.804006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.808245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.808296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.808309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.812518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.812556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.812569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.815816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.815853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.815866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.820410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 
00:30:22.175 [2024-11-20 17:51:03.820450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.820464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.824831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.824904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.824918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.828857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.828896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.828910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.833279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.833330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.833343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.838503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.838554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.838566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.843766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.843805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.843818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.846813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.846860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.846873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.851716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.851768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.175 [2024-11-20 17:51:03.851780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.175 [2024-11-20 17:51:03.856543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.175 [2024-11-20 17:51:03.856609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.856636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.861549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.861602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.865148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.865224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.869301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.869352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.869365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.873925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.873976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.873989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.878425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.878476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.878504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.881903] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.881968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.881980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.886096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.886158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.886172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.890412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.890448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.890461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.894125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.894161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.894174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.898098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.898135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.898147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.901562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.901600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.901613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.905271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.905307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.905321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:22.176 [2024-11-20 17:51:03.909422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.909459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.909472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.913486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.913524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.913537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.917013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.917050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.917076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.921133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.921169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.921182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.925057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.925105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.925118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.929019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.929061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.929087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.932663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.932702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.932715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.936622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.936660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.936673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.940674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.940719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.940732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.944162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.944198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.948378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.948414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.948427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.951808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.951845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.951858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.176 [2024-11-20 17:51:03.956993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.176 [2024-11-20 17:51:03.957033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.176 [2024-11-20 17:51:03.957047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.962079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.962117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.962130] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.966303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.966342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.966355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.969101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.969138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.969150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.974332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.974372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.974385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.978666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.978705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.978722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.982846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.982894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.986193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.986231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.986244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.990476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.990516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.990529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.994575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.994616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.994630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:03.998547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:03.998588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:03.998602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:04.002921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:04.002965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:04.002980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:04.007039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.446 [2024-11-20 17:51:04.007097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.446 [2024-11-20 17:51:04.007112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.446 [2024-11-20 17:51:04.010819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.010859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.010873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.015034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.015088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.015103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.018432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.018470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
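When the two-second randread run below winds down, the script gates on the driver's error counters: `get_transient_errcount` (traced after the JSON summary) queries bdevperf's RPC socket for the bdev's iostat and extracts the transient-transport-error count with jq. A sketch of that check, using the socket and bdev name from this run and executed from the SPDK repo root:

```bash
# get_transient_errcount, as traced below: ask bdevperf for nvme0n1's
# iostat over its RPC socket and pull the counter maintained when
# bdev_nvme_set_options --nvme-error-stat is set (as it is for these
# passes); the test requires it to be non-zero.
count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( count > 0 ))
```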
00:30:22.447 [2024-11-20 17:51:04.018483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.021988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.022038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.022051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.026439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.026487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.026500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.030933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.031000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.031016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.447 [2024-11-20 17:51:04.035670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ac3930) 00:30:22.447 [2024-11-20 17:51:04.035712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.447 [2024-11-20 17:51:04.035726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.447 7023.50 IOPS, 877.94 MiB/s 00:30:22.447 Latency(us) 00:30:22.447 [2024-11-20T17:51:04.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.447 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:22.447 nvme0n1 : 2.00 7020.43 877.55 0.00 0.00 2274.77 603.23 10187.87 00:30:22.447 [2024-11-20T17:51:04.235Z] =================================================================================================================== 00:30:22.447 [2024-11-20T17:51:04.235Z] Total : 7020.43 877.55 0.00 0.00 2274.77 603.23 10187.87 00:30:22.447 { 00:30:22.447 "results": [ 00:30:22.447 { 00:30:22.447 "job": "nvme0n1", 00:30:22.447 "core_mask": "0x2", 00:30:22.447 "workload": "randread", 00:30:22.447 "status": "finished", 00:30:22.447 "queue_depth": 16, 00:30:22.447 "io_size": 131072, 00:30:22.447 "runtime": 2.003153, 00:30:22.447 "iops": 7020.432288497184, 00:30:22.447 "mibps": 877.554036062148, 00:30:22.447 "io_failed": 0, 00:30:22.447 "io_timeout": 0, 00:30:22.447 "avg_latency_us": 2274.770063545215, 00:30:22.447 "min_latency_us": 603.2290909090909, 00:30:22.447 "max_latency_us": 10187.869090909091 00:30:22.447 } 00:30:22.447 ], 00:30:22.447 "core_count": 1 00:30:22.447 } 00:30:22.447 17:51:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:22.447 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:22.447 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:22.447 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:22.447 | .driver_specific 00:30:22.447 | .nvme_error 00:30:22.447 | .status_code 00:30:22.447 | .command_transient_transport_error' 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 453 > 0 )) 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115484 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115484 ']' 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115484 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115484 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:22.705 killing process with pid 115484 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115484' 00:30:22.705 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.705 00:30:22.705 Latency(us) 00:30:22.705 [2024-11-20T17:51:04.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.705 [2024-11-20T17:51:04.493Z] =================================================================================================================== 00:30:22.705 [2024-11-20T17:51:04.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115484 00:30:22.705 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115484 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115570 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115570 /var/tmp/bperf.sock 00:30:22.965 17:51:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115570 ']' 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:22.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:22.965 17:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:22.965 [2024-11-20 17:51:04.701660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:22.965 [2024-11-20 17:51:04.701750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115570 ] 00:30:23.223 [2024-11-20 17:51:04.843682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.223 [2024-11-20 17:51:04.939490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.159 17:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:24.159 17:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:24.159 17:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.159 17:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.417 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.985 nvme0n1 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:24.985 17:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.985 Running I/O for 2 seconds... 00:30:24.985 [2024-11-20 17:51:06.669686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f6458 00:30:24.985 [2024-11-20 17:51:06.670858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.670919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.684412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5658 00:30:24.985 [2024-11-20 17:51:06.686192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.686251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.693337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e0a68 00:30:24.985 [2024-11-20 17:51:06.694174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.694215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.708536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f1868 00:30:24.985 [2024-11-20 17:51:06.709979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.710027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.720182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc998 00:30:24.985 [2024-11-20 17:51:06.721337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.721400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.732670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f4298 00:30:24.985 [2024-11-20 17:51:06.733859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.733891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.747011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e3498 00:30:24.985 [2024-11-20 17:51:06.748845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.748875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.755694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e2c28 00:30:24.985 [2024-11-20 17:51:06.756553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.756585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:24.985 [2024-11-20 17:51:06.770104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f3a28 00:30:24.985 [2024-11-20 17:51:06.771636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.985 [2024-11-20 17:51:06.771668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.781324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fa7d8 00:30:25.244 [2024-11-20 17:51:06.782522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.782572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.793360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e2c28 00:30:25.244 [2024-11-20 17:51:06.794504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.794564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.805145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e38d0 00:30:25.244 [2024-11-20 17:51:06.806071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.806131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.819632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fda78 00:30:25.244 [2024-11-20 17:51:06.821487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.821520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.828648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0788 00:30:25.244 [2024-11-20 17:51:06.829426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.843192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f46d0 00:30:25.244 [2024-11-20 17:51:06.844628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.844661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.854602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5a90 00:30:25.244 [2024-11-20 17:51:06.855721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.855756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.866574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e1b48 00:30:25.244 [2024-11-20 17:51:06.867797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.867831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.879144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f5be8 00:30:25.244 [2024-11-20 17:51:06.880266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.880304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.890603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f46d0 00:30:25.244 [2024-11-20 17:51:06.891598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.891639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.901898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e38d0 00:30:25.244 [2024-11-20 
17:51:06.902728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.902759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.915967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fa7d8 00:30:25.244 [2024-11-20 17:51:06.916966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.917001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.927275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e95a0 00:30:25.244 [2024-11-20 17:51:06.928174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.928207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.938576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f6020 00:30:25.244 [2024-11-20 17:51:06.939283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.939342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.244 [2024-11-20 17:51:06.952179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f2948 00:30:25.244 [2024-11-20 17:51:06.953653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.244 [2024-11-20 17:51:06.953687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:06.963472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f7970 00:30:25.245 [2024-11-20 17:51:06.964815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:06.964854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:06.974709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f81e0 00:30:25.245 [2024-11-20 17:51:06.975887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:06.975921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:06.985938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198dfdc0 
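These randwrite digest failures belong to the second error pass, whose setup appears in the xtrace above (around the `bperfpid=115570` startup): per-status error counting is enabled with unlimited bdev retries, the controller is attached with the data digest (`--ddgst`) on, and the accel crc32c operation is told to corrupt its result for 256 operations. Reassembled from those trace lines (`rpc_cmd` in the trace uses the default RPC socket, shown here without `-s`):

```bash
# Second-pass setup, reassembled from the xtrace earlier in this log.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # start with injection off
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # arm the crc32c fault
```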
00:30:25.245 [2024-11-20 17:51:06.986976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:06.987009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:06.997182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ee190 00:30:25.245 [2024-11-20 17:51:06.998038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:06.998081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:07.008441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e7c50 00:30:25.245 [2024-11-20 17:51:07.009165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:07.009208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:25.245 [2024-11-20 17:51:07.023344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f8e88 00:30:25.245 [2024-11-20 17:51:07.025027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.245 [2024-11-20 17:51:07.025060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.034518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e1b48 00:30:25.504 [2024-11-20 17:51:07.036034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.036075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.045758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f35f0 00:30:25.504 [2024-11-20 17:51:07.047157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.047188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.056967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e6b70 00:30:25.504 [2024-11-20 17:51:07.058179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.058211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.068335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with 
pdu=0x2000198f7538 00:30:25.504 [2024-11-20 17:51:07.069400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.069433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.079515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eb328 00:30:25.504 [2024-11-20 17:51:07.080412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.080443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.090690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e88f8 00:30:25.504 [2024-11-20 17:51:07.091466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.091499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.104581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f81e0 00:30:25.504 [2024-11-20 17:51:07.105533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.105565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.115821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e7818 00:30:25.504 [2024-11-20 17:51:07.116579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.116613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:25.504 [2024-11-20 17:51:07.127131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e1f80 00:30:25.504 [2024-11-20 17:51:07.127766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.504 [2024-11-20 17:51:07.127800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.140686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e01f8 00:30:25.505 [2024-11-20 17:51:07.142108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.142152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.151953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb1bd20) with pdu=0x2000198e8088 00:30:25.505 [2024-11-20 17:51:07.153204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.163308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e9e10 00:30:25.505 [2024-11-20 17:51:07.164419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.164453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.174591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f4b08 00:30:25.505 [2024-11-20 17:51:07.175564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.175597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.185879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f20d8 00:30:25.505 [2024-11-20 17:51:07.186691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.186725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.200835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ed4e8 00:30:25.505 [2024-11-20 17:51:07.202616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.202649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.212082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f9b30 00:30:25.505 [2024-11-20 17:51:07.213667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.213699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.221786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ebb98 00:30:25.505 [2024-11-20 17:51:07.222499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.222532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.235705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb1bd20) with pdu=0x2000198e6b70 00:30:25.505 [2024-11-20 17:51:07.237175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.237207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.246923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eb328 00:30:25.505 [2024-11-20 17:51:07.248226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.248260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.258192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f2510 00:30:25.505 [2024-11-20 17:51:07.259359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.259392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.269670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5a90 00:30:25.505 [2024-11-20 17:51:07.270690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:25.505 [2024-11-20 17:51:07.281252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198df118 00:30:25.505 [2024-11-20 17:51:07.282143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.505 [2024-11-20 17:51:07.282182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:25.764 [2024-11-20 17:51:07.296503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f1868 00:30:25.764 [2024-11-20 17:51:07.298373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.298415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.308077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eee38 00:30:25.765 [2024-11-20 17:51:07.309760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.309802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.319590] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e1f80 00:30:25.765 [2024-11-20 17:51:07.321197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.321238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.329744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e3d08 00:30:25.765 [2024-11-20 17:51:07.330775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.330815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.344708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5ec8 00:30:25.765 [2024-11-20 17:51:07.346427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.346469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.355601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fbcf0 00:30:25.765 [2024-11-20 17:51:07.357599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.357643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.368588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fac10 00:30:25.765 [2024-11-20 17:51:07.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.369714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.379488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0350 00:30:25.765 [2024-11-20 17:51:07.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.394233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fac10 00:30:25.765 [2024-11-20 17:51:07.396114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.396190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.402778] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5220 00:30:25.765 [2024-11-20 17:51:07.403706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.403750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.418318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f31b8 00:30:25.765 [2024-11-20 17:51:07.420197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.420241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.429875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f4298 00:30:25.765 [2024-11-20 17:51:07.431693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.431730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.443891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e4140 00:30:25.765 [2024-11-20 17:51:07.445490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.445523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.455050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e88f8 00:30:25.765 [2024-11-20 17:51:07.456310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.468411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e0a68 00:30:25.765 [2024-11-20 17:51:07.469375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.469433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.481490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fbcf0 00:30:25.765 [2024-11-20 17:51:07.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.482783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 
17:51:07.493342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e7c50 00:30:25.765 [2024-11-20 17:51:07.495350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.495398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.506155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fef90 00:30:25.765 [2024-11-20 17:51:07.507482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.507525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.522017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f96f8 00:30:25.765 [2024-11-20 17:51:07.524039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.531561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f3a28 00:30:25.765 [2024-11-20 17:51:07.532577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.532628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.765 [2024-11-20 17:51:07.547403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ebfd0 00:30:25.765 [2024-11-20 17:51:07.549087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.765 [2024-11-20 17:51:07.549127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.556602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198df988 00:30:26.024 [2024-11-20 17:51:07.557344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.557384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.572560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc998 00:30:26.024 [2024-11-20 17:51:07.574006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.574055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:30:26.024 [2024-11-20 17:51:07.584963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198edd58 00:30:26.024 [2024-11-20 17:51:07.586707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.586751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.597332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ee5c8 00:30:26.024 [2024-11-20 17:51:07.598102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.598142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.611635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ee5c8 00:30:26.024 [2024-11-20 17:51:07.613133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.613170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.623028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198dfdc0 00:30:26.024 [2024-11-20 17:51:07.624348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.624400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.634970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f4298 00:30:26.024 [2024-11-20 17:51:07.636818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.636860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.648021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5658 00:30:26.024 [2024-11-20 17:51:07.649161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.649199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:26.024 20729.00 IOPS, 80.97 MiB/s [2024-11-20T17:51:07.812Z] [2024-11-20 17:51:07.661417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e99d8 00:30:26.024 [2024-11-20 17:51:07.662220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.662261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.674303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fd640 00:30:26.024 [2024-11-20 17:51:07.675436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.024 [2024-11-20 17:51:07.675474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:26.024 [2024-11-20 17:51:07.690454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc998 00:30:26.024 [2024-11-20 17:51:07.692335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.692378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.699885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e1b48 00:30:26.025 [2024-11-20 17:51:07.700734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.700776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.715775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f31b8 00:30:26.025 [2024-11-20 17:51:07.717286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.717326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.727541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0bc0 00:30:26.025 [2024-11-20 17:51:07.729533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.729573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.740545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e6fa8 00:30:26.025 [2024-11-20 17:51:07.741787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.741829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.756504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e12d8 00:30:26.025 [2024-11-20 17:51:07.758466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.758509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.765959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f1ca0 00:30:26.025 [2024-11-20 17:51:07.766932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.766974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.781929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fdeb0 00:30:26.025 [2024-11-20 17:51:07.783590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.783634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.794064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e9e10 00:30:26.025 [2024-11-20 17:51:07.796160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.796224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:26.025 [2024-11-20 17:51:07.807768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f8618 00:30:26.025 [2024-11-20 17:51:07.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.025 [2024-11-20 17:51:07.809252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:26.284 [2024-11-20 17:51:07.819800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e6300 00:30:26.284 [2024-11-20 17:51:07.820814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.284 [2024-11-20 17:51:07.820870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:26.284 [2024-11-20 17:51:07.831612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f6020 00:30:26.284 [2024-11-20 17:51:07.832467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.284 [2024-11-20 17:51:07.832505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:26.284 [2024-11-20 17:51:07.846080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f6458 00:30:26.284 [2024-11-20 17:51:07.847637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.284 [2024-11-20 17:51:07.847688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:26.284 [2024-11-20 17:51:07.857714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f8618 00:30:26.284 [2024-11-20 17:51:07.859145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.284 [2024-11-20 17:51:07.859181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:26.284 [2024-11-20 17:51:07.869746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc998 00:30:26.284 [2024-11-20 17:51:07.870855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.870896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.881332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f2948 00:30:26.285 [2024-11-20 17:51:07.882204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.882236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.892395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ebfd0 00:30:26.285 [2024-11-20 17:51:07.894169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.894203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.905398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f20d8 00:30:26.285 [2024-11-20 17:51:07.906311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.906359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.919573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fd208 00:30:26.285 [2024-11-20 17:51:07.921651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.921685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.928189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f5be8 00:30:26.285 [2024-11-20 17:51:07.929068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.929106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.943498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e95a0 00:30:26.285 [2024-11-20 17:51:07.945395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.945427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.955051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e6b70 00:30:26.285 [2024-11-20 17:51:07.956731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.956778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.964985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e01f8 00:30:26.285 [2024-11-20 17:51:07.965733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.977150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f7da8 00:30:26.285 [2024-11-20 17:51:07.978192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.978255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.988622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fd640 00:30:26.285 [2024-11-20 17:51:07.989492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:07.989525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:07.999812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eaef0 00:30:26.285 [2024-11-20 17:51:08.000602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:08.000635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:08.013902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fd640 00:30:26.285 [2024-11-20 17:51:08.014833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 
17:51:08.014866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:08.025352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e3d08 00:30:26.285 [2024-11-20 17:51:08.026077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:08.026110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:08.036925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198efae0 00:30:26.285 [2024-11-20 17:51:08.037554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:08.037588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:08.050652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0788 00:30:26.285 [2024-11-20 17:51:08.052149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:08.052179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:26.285 [2024-11-20 17:51:08.061925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f35f0 00:30:26.285 [2024-11-20 17:51:08.063213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.285 [2024-11-20 17:51:08.063244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:26.545 [2024-11-20 17:51:08.073310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e0a68 00:30:26.545 [2024-11-20 17:51:08.074383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.545 [2024-11-20 17:51:08.074415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:26.545 [2024-11-20 17:51:08.084801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e8088 00:30:26.545 [2024-11-20 17:51:08.085711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.545 [2024-11-20 17:51:08.085743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:26.545 [2024-11-20 17:51:08.096165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ed0b0 00:30:26.545 [2024-11-20 17:51:08.096919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.545 
[2024-11-20 17:51:08.096950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:26.545 [2024-11-20 17:51:08.110269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eee38 00:30:26.545 [2024-11-20 17:51:08.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.545 [2024-11-20 17:51:08.111273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:26.545 [2024-11-20 17:51:08.121409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e9e10 00:30:26.545 [2024-11-20 17:51:08.122175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.122206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.132663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198eb760 00:30:26.546 [2024-11-20 17:51:08.133322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.133354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.146440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ebfd0 00:30:26.546 [2024-11-20 17:51:08.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.147927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.157612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fb480 00:30:26.546 [2024-11-20 17:51:08.158888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.158919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.169109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fda78 00:30:26.546 [2024-11-20 17:51:08.170229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.170260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.180418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0bc0 00:30:26.546 [2024-11-20 17:51:08.181395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:26.546 [2024-11-20 17:51:08.181428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.192121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fe720 00:30:26.546 [2024-11-20 17:51:08.192942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.193002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.206415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fb8b8 00:30:26.546 [2024-11-20 17:51:08.207486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.207536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.218276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ee5c8 00:30:26.546 [2024-11-20 17:51:08.219568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.219628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.229915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ed920 00:30:26.546 [2024-11-20 17:51:08.231226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.231256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.241957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5658 00:30:26.546 [2024-11-20 17:51:08.242792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.242825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.253396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f96f8 00:30:26.546 [2024-11-20 17:51:08.254121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.254153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.267103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0350 00:30:26.546 [2024-11-20 17:51:08.268587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.268619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.278418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0350 00:30:26.546 [2024-11-20 17:51:08.279747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.279783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.289583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e3060 00:30:26.546 [2024-11-20 17:51:08.290752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.290790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.301291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f8e88 00:30:26.546 [2024-11-20 17:51:08.302489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.302521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.313750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc560 00:30:26.546 [2024-11-20 17:51:08.315165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.315197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:26.546 [2024-11-20 17:51:08.324832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e23b8 00:30:26.546 [2024-11-20 17:51:08.325805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.546 [2024-11-20 17:51:08.325838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:26.805 [2024-11-20 17:51:08.336391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e3060 00:30:26.805 [2024-11-20 17:51:08.337459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.805 [2024-11-20 17:51:08.337491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:26.805 [2024-11-20 17:51:08.348421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fb480 00:30:26.806 [2024-11-20 17:51:08.349470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3720 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.359714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fc560 00:30:26.806 [2024-11-20 17:51:08.360623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.360655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.370910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198df118 00:30:26.806 [2024-11-20 17:51:08.371652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.371684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.384828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e8088 00:30:26.806 [2024-11-20 17:51:08.385727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.385759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.395995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f4b08 00:30:26.806 [2024-11-20 17:51:08.396798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.396833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.407302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ddc00 00:30:26.806 [2024-11-20 17:51:08.407933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.407966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.421230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198efae0 00:30:26.806 [2024-11-20 17:51:08.422650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.422689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.434031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ee190 00:30:26.806 [2024-11-20 17:51:08.435861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.435899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.442611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198df118 00:30:26.806 [2024-11-20 17:51:08.443422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.443458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.456934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f9b30 00:30:26.806 [2024-11-20 17:51:08.458400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.458434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.467978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f2510 00:30:26.806 [2024-11-20 17:51:08.469097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.469129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.479623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fd208 00:30:26.806 [2024-11-20 17:51:08.480792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.480825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.491997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ed920 00:30:26.806 [2024-11-20 17:51:08.493169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.493200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.503374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f9b30 00:30:26.806 [2024-11-20 17:51:08.504377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.504410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.514652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f3e60 00:30:26.806 [2024-11-20 17:51:08.515521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.515553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.528158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fac10 00:30:26.806 [2024-11-20 17:51:08.529486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.529520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.539194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e5ec8 00:30:26.806 [2024-11-20 17:51:08.540161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.540197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.550784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198fef90 00:30:26.806 [2024-11-20 17:51:08.551830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.551864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.564996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f81e0 00:30:26.806 [2024-11-20 17:51:08.566710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.566746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.573418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f35f0 00:30:26.806 [2024-11-20 17:51:08.574158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.574189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:26.806 [2024-11-20 17:51:08.587566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e38d0 00:30:26.806 [2024-11-20 17:51:08.589050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.806 [2024-11-20 17:51:08.589090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:27.064 [2024-11-20 17:51:08.598941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198e9e10 00:30:27.064 [2024-11-20 17:51:08.599714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:27.064 [2024-11-20 17:51:08.599746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:27.064 [2024-11-20 17:51:08.613219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f8618 00:30:27.064 [2024-11-20 17:51:08.614651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:27.064 [2024-11-20 17:51:08.614686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:27.064 [2024-11-20 17:51:08.625764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f1868 00:30:27.064 [2024-11-20 17:51:08.627361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:27.064 [2024-11-20 17:51:08.627394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:27.064 [2024-11-20 17:51:08.636247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198f0bc0 00:30:27.064 [2024-11-20 17:51:08.638032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:27.064 [2024-11-20 17:51:08.638079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:27.064 [2024-11-20 17:51:08.649238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bd20) with pdu=0x2000198ea680 00:30:27.064 [2024-11-20 17:51:08.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:27.064 [2024-11-20 17:51:08.650278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:27.064 20842.50 IOPS, 81.42 MiB/s
00:30:27.064 Latency(us)
00:30:27.064 [2024-11-20T17:51:08.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.064 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:27.064 nvme0n1 : 2.00 20855.96 81.47 0.00 0.00 6129.67 2472.49 17277.67
00:30:27.064 [2024-11-20T17:51:08.852Z] ===================================================================================================================
00:30:27.064 [2024-11-20T17:51:08.852Z] Total : 20855.96 81.47 0.00 0.00 6129.67 2472.49 17277.67
00:30:27.064 {
00:30:27.064   "results": [
00:30:27.064     {
00:30:27.064       "job": "nvme0n1",
00:30:27.064       "core_mask": "0x2",
00:30:27.064       "workload": "randwrite",
00:30:27.064       "status": "finished",
00:30:27.064       "queue_depth": 128,
00:30:27.064       "io_size": 4096,
00:30:27.064       "runtime": 2.004847,
00:30:27.064       "iops": 20855.9555916237,
00:30:27.064       "mibps": 81.46857652978008,
00:30:27.064       "io_failed": 0,
00:30:27.064       "io_timeout": 0,
00:30:27.064       "avg_latency_us": 6129.667032306177,
00:30:27.064       "min_latency_us": 2472.4945454545455,
00:30:27.064       "max_latency_us": 17277.672727272726
00:30:27.064     }
00:30:27.064   ],
00:30:27.064   "core_count": 1
00:30:27.064 }
00:30:27.064 17:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:27.064 17:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:27.064 17:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:27.064 | .driver_specific
00:30:27.064 | .nvme_error
00:30:27.064 | .status_code
00:30:27.064 | .command_transient_transport_error'
00:30:27.064 17:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115570
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115570 ']'
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115570
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115570
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:27.321 killing process with pid 115570
17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115570'
Received shutdown signal, test time was about 2.000000 seconds
00:30:27.321
00:30:27.321 Latency(us)
00:30:27.321 [2024-11-20T17:51:09.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.321 [2024-11-20T17:51:09.109Z] ===================================================================================================================
00:30:27.321 [2024-11-20T17:51:09.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115570
00:30:27.321 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115570
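For readers reconstructing the check traced above: get_transient_errcount boils down to one iostat RPC plus a jq filter, and the test then asserts the counter is non-zero (163 in this run). A minimal standalone sketch in plain shell, using the socket path, bdev name, and jq path exactly as traced; the nvme_error counters are only populated because the bdev layer is configured with bdev_nvme_set_options --nvme-error-stat, as the setup below shows again:

    #!/usr/bin/env bash
    # Sketch of the transient-error assertion traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The injected digest errors must have surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
    (( errcount > 0 ))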
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=115661
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 115661 /var/tmp/bperf.sock
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 115661 ']'
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:27.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:27.579 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:27.579 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:27.579 Zero copy mechanism will not be used.
00:30:27.579 [2024-11-20 17:51:09.322110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:30:27.579 [2024-11-20 17:51:09.322208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115661 ]
00:30:27.837 [2024-11-20 17:51:09.461668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:27.837 [2024-11-20 17:51:09.558635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:28.095 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:28.095 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:30:28.095 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:28.095 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:28.354 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:28.354 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:28.354 17:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:28.354 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
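The trace below finishes the setup the same way the previous run did: attach the controller with data digests enabled, re-arm the crc32c corruption injector, then start the workload. Condensed into a plain-shell sketch (only RPCs that appear in this trace; the socket path, target address, and NQN are the ones this run used):

    #!/usr/bin/env bash
    # Sketch of the error-injection arm of this digest test. Corrupting
    # every 32nd crc32c result makes data-digest checks fail, so the
    # affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # count NVMe errors per status code so they can be read back later
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # keep injection off while the controller attaches cleanly
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable

    # attach over TCP with data digest (--ddgst) enabled
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 32nd crc32c operation, then drive I/O for 2 seconds
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests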
00:30:28.354 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:28.354 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:28.612 nvme0n1
00:30:28.612 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:28.612 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:28.612 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:28.871 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:28.871 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:28.871 17:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:28.871 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:28.871 Zero copy mechanism will not be used.
00:30:28.871 Running I/O for 2 seconds...
00:30:28.871 [2024-11-20 17:51:10.514257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.871 [2024-11-20 17:51:10.514644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.871 [2024-11-20 17:51:10.514681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.871 [2024-11-20 17:51:10.520056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.871 [2024-11-20 17:51:10.520411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.871 [2024-11-20 17:51:10.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.871 [2024-11-20 17:51:10.525566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.871 [2024-11-20 17:51:10.525916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.525951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.531101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.531459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.531495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.536709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.537092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.537128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.542391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.542793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.542828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.548075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.548446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.548480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.553662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.554020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.554054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.559301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.559644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.559679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.566211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.566600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.566635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.571850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.572215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.572249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.577381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.577729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.577764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.582925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.583291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.583326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.588487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.588828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.588862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.593899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.594273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.594307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.599578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.599937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.599971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.605115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.605456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.605490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.610611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.610966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.610999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.616116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.616466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 
17:51:10.616500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.621686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.622047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.622093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.627352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.627693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.627726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.633448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.633799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.633834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.639211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.639605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.639633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.644962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.645327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.645366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.650657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.651139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.651193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.872 [2024-11-20 17:51:10.656308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:28.872 [2024-11-20 17:51:10.656604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:28.872 [2024-11-20 17:51:10.656668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.132 [2024-11-20 17:51:10.661388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.661631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.661670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.666114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.666315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.666354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.670785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.670995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.671032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.675559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.675752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.675781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.680267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.680465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.680493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.684939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.685161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.685190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.689619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.689818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.689845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.694281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.694448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.694471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.699015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.699194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.699217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.703802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.703987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.708605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.708792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.713365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.713531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.713553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.718049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.718227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.718250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.722730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.722924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.722946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.727725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.727902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.727925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.732580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.732748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.732770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.737395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.737565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.737589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.742176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.742347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.742369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.746946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.747128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.747151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.751676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.751840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.751862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.756389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.756553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.756575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.761095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.761260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.761283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.765744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.765920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.765942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.770511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.770678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.770702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.775253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.775419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.133 [2024-11-20 17:51:10.775442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.133 [2024-11-20 17:51:10.779949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.133 [2024-11-20 17:51:10.780129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.780152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.784602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.784770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.784792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.789279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 
[2024-11-20 17:51:10.789444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.789467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.793977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.794159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.794182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.798670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.798842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.798864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.803414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.803589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.803611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.808228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.808393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.808415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.812908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.813087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.813111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.817640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.817806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.817829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.822390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) 
with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.822556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.822578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.827131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.827303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.827325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.831838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.832004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.836548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.836723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.836746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.841363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.841529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.841552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.846150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.846315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.846337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.850959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.851138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.851162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.855632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.855804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.855827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.860402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.860566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.860589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.865117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.865291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.865313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.870717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.870897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.870920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.875625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.875793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.875815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.880413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.880580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.880602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.885212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.885378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.885400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.889966] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.890144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.890166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.894685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.894852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.894874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.899395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.899587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.899609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.904138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.134 [2024-11-20 17:51:10.904302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.134 [2024-11-20 17:51:10.904324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.134 [2024-11-20 17:51:10.908812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.135 [2024-11-20 17:51:10.908992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.135 [2024-11-20 17:51:10.909020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.135 [2024-11-20 17:51:10.913492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.135 [2024-11-20 17:51:10.913674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.135 [2024-11-20 17:51:10.913698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.135 [2024-11-20 17:51:10.918187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.135 [2024-11-20 17:51:10.918355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.135 [2024-11-20 17:51:10.918379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
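Condensed, the setup that produced the run above is the sequence host/digest.sh drives over the bperf RPC socket: start bdevperf waiting for RPC, allow unlimited retries, clear and then arm the accel crc32c error injector, attach the controller with data digest enabled, and kick off the workload. The following is a minimal sketch reusing only commands and arguments visible in this log (paths, the 10.0.0.3 target, and the cnode1 NQN are specific to this run); the backgrounding and the stripped-down ordering are a simplification, since the actual harness interleaves waitforlisten and xtrace handling around these calls:

    # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Keep per-controller error statistics and retry failed I/O indefinitely.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Issued through the harness's rpc_cmd, i.e. against the nvmf target's
    # default RPC socket: first clear any earlier crc32c injection ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        accel_error_inject_error -o crc32c -t disable

    # Attach the target with data digest (--ddgst) so every data PDU
    # carries a CRC32C that the receiver verifies.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... then arm corruption of crc32c operations (flags as in the log).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the configured 2-second randwrite workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces in the log as the tcp.c "Data digest error" record followed by the affected WRITE and its TRANSIENT TRANSPORT ERROR (00/22) completion; with --bdev-retry-count set to -1, the intent appears to be that bdevperf keeps retrying these transient failures rather than aborting the run.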
00:30:29.396 [2024-11-20 17:51:10.922844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.923031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.923054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.927570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.927736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.927759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.932270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.932431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.932454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.936968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.937149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.937172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.941679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.941842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.941871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.946339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.946517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.946545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.951123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.951294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.951322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.955858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.956025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.956053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.960601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.960765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.960787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.965337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.965500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.965522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.970008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.970187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.970215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.974775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.974952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.974982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.979569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.979734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.979763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.984261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.984436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.984464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.988970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.989158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.989181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.993665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.993829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.993851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:10.998433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:10.998598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:10.998627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.003143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.003305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.003327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.007805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.007984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.008013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.012454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.012630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.012657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.017151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.017318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.017341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.021857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.022023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.022046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.026584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.026751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.031300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.031479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.031508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.396 [2024-11-20 17:51:11.036030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.396 [2024-11-20 17:51:11.036216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.396 [2024-11-20 17:51:11.036246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.040809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.040977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.041006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.045592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.045772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.045803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.050286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.050467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 
[2024-11-20 17:51:11.050497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.055002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.055209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.059734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.059905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.059928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.064470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.064641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.064672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.069231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.069397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.069427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.073912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.074094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.074120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.078648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.078821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.078845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.083377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.083558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.083589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.088135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.088306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.088335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.092945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.093144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.093173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.097854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.098021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.098050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.102668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.102849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.102886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.107405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.107587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.107619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.112142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.112342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.116915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.117096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.117140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.121749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.121933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.121963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.126497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.126680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.126710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.131303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.131481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.131519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.136030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.136244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.136273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.140845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.141013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.141042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.145673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.145848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.145877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.150553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.150720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.150748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.155300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.155461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.160001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.160174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.160197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.397 [2024-11-20 17:51:11.164689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.397 [2024-11-20 17:51:11.164866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.397 [2024-11-20 17:51:11.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.398 [2024-11-20 17:51:11.169283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.398 [2024-11-20 17:51:11.169460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.398 [2024-11-20 17:51:11.169482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.398 [2024-11-20 17:51:11.173972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.398 [2024-11-20 17:51:11.174155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.398 [2024-11-20 17:51:11.174179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.398 [2024-11-20 17:51:11.178755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.398 [2024-11-20 17:51:11.178934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.398 [2024-11-20 17:51:11.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.183504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.183699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.183726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.188289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.188453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.188476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.193083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.193260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.193288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.197851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.198027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.198049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.202734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.202930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.202953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.207447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.207620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.207648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.212256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.212426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.212453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.217044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 
[2024-11-20 17:51:11.217223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.217249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.221853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.222015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.222038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.226829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.226998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.227021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.231710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.231879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.231901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.236730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.236900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.241448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.241610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.241634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.246318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.246483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.246505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.251184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with 
pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.251352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.251373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.256099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.256292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.256314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.260956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.261166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.261190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.265918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.266120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.266157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.270848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.658 [2024-11-20 17:51:11.271102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.658 [2024-11-20 17:51:11.271125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.658 [2024-11-20 17:51:11.275679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.275863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.275885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.280518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.280713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.280750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.285638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.285842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.290515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.290710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.295461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.295649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.295672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.300390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.300597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.300620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.305342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.305541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.305563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.310028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.310236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.310257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.314791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.315015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.319697] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.319883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.319906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.324620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.324808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.324835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.329484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.329670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.329692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.334574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.334741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.334763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.339540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.339716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.339738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.344460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.344692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.344713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.349380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.349591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:29.659 [2024-11-20 17:51:11.354212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.354402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.354441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.359309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.359475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.359498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.364381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.364586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.369289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.369481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.369519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.374277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.374462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.374483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.379316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.379515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.379539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.384318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.384484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.384507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.389073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.389312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.389334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.394003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.394238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.398930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.399119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.399142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.659 [2024-11-20 17:51:11.403809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.659 [2024-11-20 17:51:11.403985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.659 [2024-11-20 17:51:11.404007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.408621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.408832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.408855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.413476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.413702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.413724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.418281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.418448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.418470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.423145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.423320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.423341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.427957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.428145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.428179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.432952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.433190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.433229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.437868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.438051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.438072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.660 [2024-11-20 17:51:11.443102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.660 [2024-11-20 17:51:11.443288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.660 [2024-11-20 17:51:11.443309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.447966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.448163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.448185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.452768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.452981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.457748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.457914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.457936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.462529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.462695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.462717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.467297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.467460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.467483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.472042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.472216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.472239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.477128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.477306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.477328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.481942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.482121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.482144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.486679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.486858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 
[2024-11-20 17:51:11.486890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.491412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.491578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.491600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.496083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.496289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.496331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.500786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.920 [2024-11-20 17:51:11.500964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.920 [2024-11-20 17:51:11.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.920 [2024-11-20 17:51:11.505562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.505740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.505765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 6296.00 IOPS, 787.00 MiB/s [2024-11-20T17:51:11.709Z] [2024-11-20 17:51:11.511299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.511471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.511503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.516012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.516175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.516200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.520718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.520871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.520893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.525741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.525889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.525912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.530635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.530796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.530818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.535707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.535838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.535874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.540814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.541012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.541034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.545740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.545930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.545973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.550714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.550866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.550928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.555316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.555531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.555554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.559986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.560133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.560154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.565053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.565230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.565253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.569961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.570181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.570203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.574908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.575082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.575104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.579632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.579772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.579794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.584501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.584721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.584743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.589397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 
[2024-11-20 17:51:11.589527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.589550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.594109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.594263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.594285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.598780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.598983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.599010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.603547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.608269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.608400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.608422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.613251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.613426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.613454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.617964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:29.921 [2024-11-20 17:51:11.618127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.921 [2024-11-20 17:51:11.618149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.921 [2024-11-20 17:51:11.622821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with 
pdu=0x2000198fef90
00:30:29.921 [2024-11-20 17:51:11.622945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.922 [2024-11-20 17:51:11.622967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[2024-11-20 17:51:11.627656 through 17:51:11.722459: twenty further WRITE commands (sqid:1 cid:15, lba 25472, 12928, 20320, 14528, 5216, 19840, 13440, 11872, 12384, 6784, 24864, 9248, 21024, 24352, 20256, 1728, 15872, 10080, 6496, 15648) fail the same way: tcp.c:2233:data_crc32_calc_done logs *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90, and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061.]
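Context for this run of identical records: tcp.c's data_crc32_calc_done() has recomputed the CRC32C data digest (DDGST) over a received NVMe/TCP DATA PDU and found it does not match the digest carried on the wire, so each in-flight WRITE is completed with a transient transport error instead of success. As a minimal illustrative sketch only, not SPDK's implementation (the pdu struct and its field names here are hypothetical), an NVMe/TCP-style data digest check can be written as:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
 * NVMe/TCP's data digest (DDGST) is a CRC32C over the PDU DATA field. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                /* standard CRC32C seed */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return ~crc;                               /* final bit inversion */
}

/* Hypothetical PDU shape, for illustration only. */
struct pdu {
    const uint8_t *data;    /* PDU DATA field as received */
    size_t data_len;
    uint32_t ddgst;         /* digest received on the wire */
};

/* A mismatch here is what the log reports as "Data digest error";
 * the transport then fails the command rather than delivering bad data. */
static bool pdu_data_digest_ok(const struct pdu *p)
{
    return crc32c(p->data, p->data_len) == p->ddgst;
}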
00:30:30.198 [2024-11-20 17:51:11.727216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90
00:30:30.198 [2024-11-20 17:51:11.727407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.198 [2024-11-20 17:51:11.727444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-11-20 17:51:11.732203 through 17:51:12.194679: ninety-six further WRITE commands on qid:1 hit the identical data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd still cycling 0001/0021/0041/0061; from 17:51:11.962267 (lba:6304) onward the failing commands carry cid:0 instead of cid:15.]
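How to read the completion lines in these records: "(00/22)" is status code type 0x0 (generic command status) together with status code 0x22 (Command Transient Transport Error), and dnr:0 means the do-not-retry bit is clear, so the host is allowed to retry the command. A hedged, self-contained sketch, not SPDK code, of pulling those fields out of completion queue entry dwords 2 and 3 as laid out in the NVMe base specification:

#include <stdint.h>
#include <stdio.h>

/* Decode the fields that the spdk_nvme_print_completion lines above show,
 * from completion queue entry dwords 2 and 3 (NVMe base spec layout). */
static void print_cpl_status(uint32_t dw2, uint32_t dw3)
{
    uint16_t sqhd = dw2 & 0xFFFFu;         /* SQ head pointer */
    unsigned p    = (dw3 >> 16) & 0x1u;    /* phase tag */
    unsigned sc   = (dw3 >> 17) & 0xFFu;   /* status code; 0x22 = transient transport error */
    unsigned sct  = (dw3 >> 25) & 0x7u;    /* status code type; 0x0 = generic */
    unsigned m    = (dw3 >> 30) & 0x1u;    /* more status information available */
    unsigned dnr  = (dw3 >> 31) & 0x1u;    /* do not retry; 0 = command may be retried */

    printf("(%02x/%02x) sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, (unsigned)sqhd, p, m, dnr);
}

int main(void)
{
    /* Hypothetical sample values matching the log: SCT 0x0, SC 0x22,
     * sqhd 0x0001, phase/more/do-not-retry all clear. */
    print_cpl_status(0x00000001u, 0x22u << 17);
    return 0;
}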
00:30:30.470 [2024-11-20 17:51:12.199469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90
00:30:30.470 [2024-11-20 17:51:12.199637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.470 [2024-11-20 17:51:12.199682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-11-20 17:51:12.204259 through 17:51:12.324430: twenty-six further WRITE commands (sqid:1 cid:0, lba 21856, 8928, 19328, 8576, 9344, 18560, 8416, 8256, 14688, 4384, 14528, 13504, 14016, 17408, 8512, 15648, 24192, 4544, 13056, 288, 6112, 20896, 6240, 12032, 3168, 11360) fail with the same data digest error on tqpair=(0xb1bec0) and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0041/0061/0001/0021.]
00:30:30.732 [2024-11-20 17:51:12.328912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90
00:30:30.732 [2024-11-20 17:51:12.329093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.732 [2024-11-20 17:51:12.329116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.333666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.333799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.333820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.338366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.338526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.338548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.343055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.343266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.343288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.347789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.347911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.347933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.352472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.352675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.352696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.357144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.357362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.361818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.361988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.362010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.366518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.366695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.371299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.371475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.371496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.375987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.376170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.380651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.380813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.380837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.385325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.385494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.385515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.389974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.390183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.390212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.394706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.394892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 
17:51:12.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.399444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.399585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.399608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.404177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.404345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.404367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.408839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.409020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.409042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.413528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.413693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.413715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.418248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.418415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.418437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.422925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.423125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.427622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.427776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.732 [2024-11-20 17:51:12.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.432315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.432448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.432470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.732 [2024-11-20 17:51:12.437048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.732 [2024-11-20 17:51:12.437194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.732 [2024-11-20 17:51:12.437215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.441796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.441969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.441991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.446506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.446673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.446695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.451238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.451381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.451403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.455911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.456079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.460633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.460794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.460825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.465333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.465448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.465471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.470004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.470176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.470198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.474703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.474874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.474916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.479396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.479573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.484096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.484280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.484301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.488742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.488859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.493442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.493649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.498181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.498383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.498407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.733 [2024-11-20 17:51:12.502926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.503110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.503133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.733 6350.00 IOPS, 793.75 MiB/s [2024-11-20T17:51:12.521Z] [2024-11-20 17:51:12.508821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1bec0) with pdu=0x2000198fef90 00:30:30.733 [2024-11-20 17:51:12.509091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.733 [2024-11-20 17:51:12.509133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.733 00:30:30.733 Latency(us) 00:30:30.733 [2024-11-20T17:51:12.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:30.733 nvme0n1 : 2.00 6347.31 793.41 0.00 0.00 2514.54 1504.35 6464.23 00:30:30.733 [2024-11-20T17:51:12.521Z] =================================================================================================================== 00:30:30.733 [2024-11-20T17:51:12.521Z] Total : 6347.31 793.41 0.00 0.00 2514.54 1504.35 6464.23 00:30:30.733 { 00:30:30.733 "results": [ 00:30:30.733 { 00:30:30.733 "job": "nvme0n1", 00:30:30.733 "core_mask": "0x2", 00:30:30.733 "workload": "randwrite", 00:30:30.733 "status": "finished", 00:30:30.733 "queue_depth": 16, 00:30:30.733 "io_size": 131072, 00:30:30.733 "runtime": 2.004315, 00:30:30.733 "iops": 6347.305687978187, 00:30:30.733 "mibps": 793.4132109972734, 00:30:30.733 "io_failed": 0, 00:30:30.733 "io_timeout": 0, 00:30:30.733 "avg_latency_us": 2514.544240328136, 00:30:30.733 "min_latency_us": 1504.3490909090908, 00:30:30.733 "max_latency_us": 6464.232727272727 00:30:30.733 } 00:30:30.733 ], 00:30:30.733 "core_count": 1 00:30:30.733 } 00:30:30.991 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:30.991 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:30.991 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:30.991 17:51:12 
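A quick cross-check of the summary block above: with this job's 131072-byte (128 KiB) I/O size, throughput in MiB/s is IOPS * io_size / 2^20, and the reported figures are self-consistent:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6347.31 * 131072 / 1048576 }'   # prints 793.41 MiB/s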
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:30.991 | .driver_specific 00:30:30.991 | .nvme_error 00:30:30.991 | .status_code 00:30:30.991 | .command_transient_transport_error' 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 410 > 0 )) 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 115661 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115661 ']' 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115661 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115661 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115661' 00:30:31.249 killing process with pid 115661 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 115661 00:30:31.249 Received shutdown signal, test time was about 2.000000 seconds 00:30:31.249 00:30:31.249 Latency(us) 00:30:31.249 [2024-11-20T17:51:13.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.249 [2024-11-20T17:51:13.037Z] =================================================================================================================== 00:30:31.249 [2024-11-20T17:51:13.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.249 17:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115661 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 115345 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 115345 ']' 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 115345 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115345 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:31.508 killing process with pid 115345 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115345' 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
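The jq pipeline just traced is how the test turns raw iostat JSON into a pass/fail signal. A minimal standalone sketch, reusing the rpc.py path, bperf socket, bdev name, and threshold check from the trace (only the variable plumbing is mine):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 )) && echo "injected digest errors surfaced as $count transient transport errors"

Here the trace counted 410 such completions, matching the flood of COMMAND TRANSIENT TRANSPORT ERROR (00/22) lines above.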
common/autotest_common.sh@969 -- # kill 115345 00:30:31.508 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 115345 00:30:31.766 00:30:31.766 real 0m19.090s 00:30:31.766 user 0m37.234s 00:30:31.766 sys 0m4.796s 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.766 ************************************ 00:30:31.766 END TEST nvmf_digest_error 00:30:31.766 ************************************ 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.766 rmmod nvme_tcp 00:30:31.766 rmmod nvme_fabrics 00:30:31.766 rmmod nvme_keyring 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 115345 ']' 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 115345 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 115345 ']' 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 115345 00:30:31.766 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (115345) - No such process 00:30:31.766 Process with pid 115345 is not found 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 115345 is not found' 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:31.766 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:32.025 17:51:13 
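The kill/verify dance traced above (kill -0 probe, process-name check, kill, wait) repeats for every helper process the suite starts. Reduced to a sketch, with the function body illustrative and only the messages mirroring the trace:

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap it if it was our child
    }

The second killprocess 115345 call in the trace takes the not-found branch: the pid had already exited by the time nvmftestfini ran, which is treated as success rather than an error.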
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:30:32.025 00:30:32.025 real 0m39.620s 00:30:32.025 user 1m15.503s 00:30:32.025 sys 0m10.383s 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:32.025 17:51:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:32.025 ************************************ 00:30:32.025 END TEST nvmf_digest 00:30:32.025 ************************************ 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.284 ************************************ 00:30:32.284 START TEST nvmf_mdns_discovery 00:30:32.284 ************************************ 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:30:32.284 * Looking for test storage... 
00:30:32.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:32.284 17:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.284 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:32.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.285 --rc genhtml_branch_coverage=1 00:30:32.285 --rc genhtml_function_coverage=1 00:30:32.285 --rc genhtml_legend=1 00:30:32.285 --rc geninfo_all_blocks=1 00:30:32.285 --rc geninfo_unexecuted_blocks=1 00:30:32.285 00:30:32.285 ' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:32.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.285 --rc genhtml_branch_coverage=1 00:30:32.285 --rc genhtml_function_coverage=1 00:30:32.285 --rc genhtml_legend=1 00:30:32.285 --rc geninfo_all_blocks=1 00:30:32.285 --rc geninfo_unexecuted_blocks=1 00:30:32.285 00:30:32.285 ' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:32.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.285 --rc genhtml_branch_coverage=1 00:30:32.285 --rc genhtml_function_coverage=1 00:30:32.285 --rc genhtml_legend=1 00:30:32.285 --rc geninfo_all_blocks=1 00:30:32.285 --rc geninfo_unexecuted_blocks=1 00:30:32.285 00:30:32.285 ' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:32.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.285 --rc genhtml_branch_coverage=1 00:30:32.285 --rc genhtml_function_coverage=1 00:30:32.285 --rc genhtml_legend=1 00:30:32.285 --rc geninfo_all_blocks=1 00:30:32.285 --rc geninfo_unexecuted_blocks=1 00:30:32.285 00:30:32.285 ' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
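The lcov gate just traced (lt 1.15 2) rests on scripts/common.sh's field-wise comparison: each version string is split on '.', '-' and ':' and compared numerically field by field, with missing fields read as zero. A condensed re-implementation under those rules (the function packaging is mine; the splitting and compare logic follow the trace):

    lt() {  # usage: lt 1.15 2  -> exit 0 when the first version sorts lower
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: enable the branch/function coverage flags"

That true result is why the long --rc lcov_branch_coverage=1 option blocks appear in the trace above.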
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:32.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
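A few steps back, nvmf/common.sh derived the host identity for this run from nvme-cli: gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the uuid portion doubles as the host ID (here 331739d4-7eee-415c-8218-055dbccf3f5e). A sketch of that wiring; the parameter expansion used to strip the prefix is my reconstruction, not lifted from the script:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # requires nvme-cli
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # uuid part; it contains no colons
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"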
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:32.285 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:32.286 Cannot find device "nvmf_init_br" 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:30:32.286 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:32.544 Cannot find device "nvmf_init_br2" 00:30:32.544 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:32.545 Cannot find device "nvmf_tgt_br" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:32.545 Cannot find device "nvmf_tgt_br2" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:32.545 Cannot find device "nvmf_init_br" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:32.545 Cannot find device "nvmf_init_br2" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:32.545 Cannot find device "nvmf_tgt_br" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:32.545 Cannot find device "nvmf_tgt_br2" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:32.545 Cannot find device "nvmf_br" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:32.545 Cannot find device "nvmf_init_if" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:32.545 Cannot find device "nvmf_init_if2" 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:30:32.545 17:51:14 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:32.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:32.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
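For orientation, the nvmf_veth_init sequence traced around this point reduces to the following sketch. Interface, namespace, and address names are taken from the trace itself; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) is built the same way and elided here. A condensed illustration, assuming root and iproute2, not the test script itself:

#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology built in the trace above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint: the *_if end carries an address,
# the *_br end gets plugged into the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

# The target-side endpoint moves into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge in the root namespace joins the two halves (the "master" steps
# that follow in the trace), after which the initiator side at 10.0.0.1
# can reach the namespaced target side at 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.3   # same sanity check the trace performs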
00:30:32.545 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:32.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:32.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:30:32.804 00:30:32.804 --- 10.0.0.3 ping statistics --- 00:30:32.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.804 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:32.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:32.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:30:32.804 00:30:32.804 --- 10.0.0.4 ping statistics --- 00:30:32.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.804 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:32.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:32.804 00:30:32.804 --- 10.0.0.1 ping statistics --- 00:30:32.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.804 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:32.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:32.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:30:32.804 00:30:32.804 --- 10.0.0.2 ping statistics --- 00:30:32.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.804 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@457 -- # return 0 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@505 -- # nvmfpid=116000 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@506 -- # waitforlisten 116000 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 116000 ']' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.804 17:51:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.804 [2024-11-20 17:51:14.519361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:32.804 [2024-11-20 17:51:14.519452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.062 [2024-11-20 17:51:14.661199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.063 [2024-11-20 17:51:14.756046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.063 [2024-11-20 17:51:14.756110] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.063 [2024-11-20 17:51:14.756126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.063 [2024-11-20 17:51:14.756137] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.063 [2024-11-20 17:51:14.756146] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.063 [2024-11-20 17:51:14.756178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 [2024-11-20 17:51:15.752569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 [2024-11-20 17:51:15.760794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 null0 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.998 null1 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.998 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.257 null2 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.257 null3 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=116050 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 116050 /tmp/host.sock 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 116050 ']' 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # 
local rpc_addr=/tmp/host.sock 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.257 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.257 17:51:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.257 [2024-11-20 17:51:15.858695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:34.257 [2024-11-20 17:51:15.858804] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116050 ] 00:30:34.257 [2024-11-20 17:51:15.998148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.516 [2024-11-20 17:51:16.097230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.516 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.516 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:34.516 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:30:34.516 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:30:34.516 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:30:34.774 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=116067 00:30:34.774 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:30:34.774 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:30:34.775 17:51:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:30:34.775 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:30:34.775 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:30:34.775 Successfully dropped root privileges. 00:30:34.775 avahi-daemon 0.8 starting up. 00:30:34.775 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:30:34.775 Successfully called chroot(). 00:30:34.775 Successfully dropped remaining capabilities. 00:30:35.710 No service file found in /etc/avahi/services. 00:30:35.710 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:30:35.710 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:30:35.710 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:30:35.710 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:30:35.710 Network interface enumeration completed. 00:30:35.710 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
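The avahi-daemon launch recorded above amounts to the following: the [server] config is synthesized on an anonymous file descriptor (hence the /dev/fd/63 in the trace) and the responder is confined to the target namespace and the two target interfaces, so the host side must discover it over the bridge rather than via loopback. A sketch under those assumptions, with avahi-daemon in PATH:

# Launch the mDNS responder inside the target namespace with an inline config.
# Process substitution provides the /dev/fd path seen in the trace.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
    '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
avahipid=$!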
00:30:35.710 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:30:35.710 Registering new address record for fe80::c42:98ff:fe7c:a092 on nvmf_tgt_if.*. 00:30:35.710 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:30:35.710 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3861654755. 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:30:35.710 17:51:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:35.710 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.969 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:30:35.969 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:30:35.969 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.969 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 
-- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 [2024-11-20 17:51:17.653646] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 [2024-11-20 17:51:17.722813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.970 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.230 17:51:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:30:36.797 [2024-11-20 17:51:18.553637] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:30:37.364 [2024-11-20 17:51:18.953688] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:37.364 [2024-11-20 17:51:18.953734] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:37.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:37.364 cookie is 0 00:30:37.364 is_local: 1 00:30:37.364 our_own: 0 00:30:37.364 wide_area: 0 00:30:37.364 multicast: 1 00:30:37.364 cached: 1 00:30:37.364 [2024-11-20 17:51:19.053641] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:37.364 [2024-11-20 17:51:19.053676] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:37.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:37.364 cookie is 0 00:30:37.364 is_local: 1 00:30:37.364 our_own: 0 00:30:37.364 wide_area: 0 00:30:37.364 multicast: 1 00:30:37.364 cached: 1 00:30:38.301 [2024-11-20 17:51:19.954742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.301 [2024-11-20 17:51:19.954808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffbff0 with addr=10.0.0.4, port=8009 00:30:38.301 [2024-11-20 17:51:19.954847] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:38.301 [2024-11-20 17:51:19.954861] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:38.301 [2024-11-20 17:51:19.954873] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:38.301 [2024-11-20 17:51:20.067740] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:30:38.301 [2024-11-20 17:51:20.067778] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:30:38.301 [2024-11-20 17:51:20.067797] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:38.559 [2024-11-20 17:51:20.153873] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:30:38.559 [2024-11-20 17:51:20.211123] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:38.559 [2024-11-20 17:51:20.211158] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:39.497 [2024-11-20 17:51:20.954665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.497 [2024-11-20 17:51:20.954745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20334d0 with addr=10.0.0.4, port=8009 00:30:39.497 [2024-11-20 17:51:20.954769] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:39.497 [2024-11-20 17:51:20.954780] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:39.497 [2024-11-20 17:51:20.954790] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:40.434 [2024-11-20 17:51:21.954612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.434 [2024-11-20 17:51:21.954690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201c270 with addr=10.0.0.4, port=8009 00:30:40.434 [2024-11-20 17:51:21.954713] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:40.434 [2024-11-20 17:51:21.954724] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:40.434 [2024-11-20 17:51:21.954733] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:41.002 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:41.261 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:41.261 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:41.261 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:41.261 17:51:22 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.261 [2024-11-20 17:51:22.813015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:30:41.261 [2024-11-20 17:51:22.815851] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:41.261 [2024-11-20 17:51:22.815905] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.261 [2024-11-20 17:51:22.820831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:30:41.261 [2024-11-20 17:51:22.821841] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.261 17:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:30:41.261 [2024-11-20 17:51:22.953046] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:41.261 [2024-11-20 17:51:22.953176] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:41.261 [2024-11-20 17:51:22.958319] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:30:41.261 [2024-11-20 
17:51:22.958344] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:30:41.261 [2024-11-20 17:51:22.958362] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:41.261 [2024-11-20 17:51:23.039464] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:41.261 [2024-11-20 17:51:23.044425] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:30:41.520 [2024-11-20 17:51:23.100922] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:41.520 [2024-11-20 17:51:23.100951] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:42.088 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:30:42.088 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:42.088 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:30:42.088 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:42.089 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:42.089 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:42.089 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:42.089 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:42.089 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:42.089 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:42.089 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
+;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.089 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 
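The host-side assertions in this stretch are all driven over the second app's RPC socket at /tmp/host.sock. Condensed, the flow looks like this; the rpc.py path is illustrative for this checkout, and the method names are the ones exercised in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location in this repo layout

# Start mDNS-based discovery on the host app (done earlier in the trace)...
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# ...then poll the resulting state, as the get_* helpers above do:
$rpc -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
$rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
$rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs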
00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.352 [2024-11-20 17:51:23.953701] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:42.352 [2024-11-20 17:51:23.953765] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:42.352 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:42.352 cookie is 0 00:30:42.352 is_local: 1 00:30:42.352 our_own: 0 00:30:42.352 wide_area: 0 00:30:42.352 multicast: 1 00:30:42.352 cached: 1 00:30:42.352 [2024-11-20 17:51:23.953780] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:42.352 17:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:42.352 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:42.610 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.610 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.610 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.610 [2024-11-20 17:51:24.153690] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:42.610 [2024-11-20 17:51:24.153721] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:42.610 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:42.610 cookie is 0 00:30:42.610 is_local: 1 00:30:42.610 our_own: 0 00:30:42.610 wide_area: 0 00:30:42.610 multicast: 1 00:30:42.610 cached: 1 00:30:42.610 [2024-11-20 17:51:24.153734] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:30:42.610 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:30:42.610 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.611 17:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.545 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.804 [2024-11-20 17:51:25.366582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:43.804 [2024-11-20 17:51:25.366926] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:43.804 [2024-11-20 17:51:25.366960] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:43.804 [2024-11-20 17:51:25.366995] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:43.804 [2024-11-20 17:51:25.367010] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.804 [2024-11-20 17:51:25.374390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:30:43.804 [2024-11-20 17:51:25.374918] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:43.804 [2024-11-20 17:51:25.374982] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.804 17:51:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:30:43.804 [2024-11-20 17:51:25.507055] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:30:43.804 [2024-11-20 17:51:25.507600] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:30:43.804 [2024-11-20 17:51:25.571975] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:43.804 [2024-11-20 17:51:25.572024] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 
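
The @73 and @116/@117 steps traced above are small helpers from host/mdns_discovery.sh. Reconstructed from the xtrace output — the pipelines appear verbatim in the trace, but the function bodies below are an inference, not the quoted source — they amount to:

    # get_subsystem_paths <name>: trsvcid of every path the named controller
    # currently has, as one sorted, space-separated line ("4420" or "4420 4421").
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # get_notification_count: count notifications newer than $notify_id and
    # advance the cursor; consistent with the notification_count=2 / notify_id=4
    # values logged above.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
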
00:30:43.804 [2024-11-20 17:51:25.572033] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:43.804 [2024-11-20 17:51:25.572078] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:43.804 [2024-11-20 17:51:25.572652] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:43.804 [2024-11-20 17:51:25.572674] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:43.804 [2024-11-20 17:51:25.572681] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:43.804 [2024-11-20 17:51:25.572698] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:44.062 [2024-11-20 17:51:25.618190] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:30:44.062 [2024-11-20 17:51:25.618216] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:44.062 [2024-11-20 17:51:25.619189] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:30:44.062 [2024-11-20 17:51:25.619208] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:44.629 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # xargs 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.887 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.147 [2024-11-20 17:51:26.727891] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:30:45.147 [2024-11-20 17:51:26.727955] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:45.147 [2024-11-20 17:51:26.727991] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:45.147 [2024-11-20 17:51:26.728006] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:45.147 [2024-11-20 17:51:26.729196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.729232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.729246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.729256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.729266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.729275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.729285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.729295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.729304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.147 [2024-11-20 17:51:26.735913] bdev_nvme.c:7144:discovery_aer_cb: 
*INFO*: Discovery[10.0.0.4:8009] got aer 00:30:45.147 [2024-11-20 17:51:26.736016] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:30:45.147 [2024-11-20 17:51:26.739149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.147 17:51:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:30:45.147 [2024-11-20 17:51:26.743588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.743634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.743649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.743659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.743669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.147 [2024-11-20 17:51:26.743678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.147 [2024-11-20 17:51:26.743688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.148 [2024-11-20 17:51:26.743697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.148 [2024-11-20 17:51:26.743707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.749177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.749311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.749345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.749357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.749377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.749392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.749401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.749412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.148 [2024-11-20 17:51:26.749429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
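
The errno-111 (ECONNREFUSED) reconnect churn that begins here is expected: steps @195/@196 just removed the port-4420 listeners from both subsystems, so bdev_nvme keeps retrying the now-dead 4420 paths until the next discovery log page drops them. In script terms the sequence is roughly (a sketch of what the trace shows, not the verbatim script):

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
    sleep 1   # @197: let discovery drop the 4420 paths and settle on 4421
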
00:30:45.148 [2024-11-20 17:51:26.753542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.759254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.759362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.759385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.759397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.759413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.759427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.759435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.759445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.148 [2024-11-20 17:51:26.759461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.148 [2024-11-20 17:51:26.763558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.148 [2024-11-20 17:51:26.763673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.763693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.148 [2024-11-20 17:51:26.763704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.763718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.763732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.763740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.763749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.148 [2024-11-20 17:51:26.763763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.148 [2024-11-20 17:51:26.769308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.769407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.769427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.769438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.769453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.769466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.769475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.769484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.148 [2024-11-20 17:51:26.769498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.148 [2024-11-20 17:51:26.773623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.148 [2024-11-20 17:51:26.773718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.773738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.148 [2024-11-20 17:51:26.773749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.773765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.773778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.773787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.773796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.148 [2024-11-20 17:51:26.773810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.148 [2024-11-20 17:51:26.779371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.779475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.779494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.779504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.779520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.779533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.779541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.779549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.148 [2024-11-20 17:51:26.779563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.148 [2024-11-20 17:51:26.783688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.148 [2024-11-20 17:51:26.783800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.783820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.148 [2024-11-20 17:51:26.783830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.783845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.783859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.783867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.783876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.148 [2024-11-20 17:51:26.783890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.148 [2024-11-20 17:51:26.789460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.789566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.789585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.789610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.789625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.789638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.789646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.789655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.148 [2024-11-20 17:51:26.789669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.148 [2024-11-20 17:51:26.793756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.148 [2024-11-20 17:51:26.793850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.793869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.148 [2024-11-20 17:51:26.793880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.793895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.148 [2024-11-20 17:51:26.793909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.148 [2024-11-20 17:51:26.793918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.148 [2024-11-20 17:51:26.793927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.148 [2024-11-20 17:51:26.793941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.148 [2024-11-20 17:51:26.799540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.148 [2024-11-20 17:51:26.799639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.148 [2024-11-20 17:51:26.799659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.148 [2024-11-20 17:51:26.799669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.148 [2024-11-20 17:51:26.799684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.799697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.799705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.799714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.149 [2024-11-20 17:51:26.799728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.149 [2024-11-20 17:51:26.803822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.149 [2024-11-20 17:51:26.803924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.803944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.149 [2024-11-20 17:51:26.803956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.803972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.804004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.804015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.804025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.149 [2024-11-20 17:51:26.804040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.149 [2024-11-20 17:51:26.809624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.149 [2024-11-20 17:51:26.809730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.809751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.149 [2024-11-20 17:51:26.809762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.809777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.809791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.809800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.809809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.149 [2024-11-20 17:51:26.809828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.149 [2024-11-20 17:51:26.813888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.149 [2024-11-20 17:51:26.813973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.813993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.149 [2024-11-20 17:51:26.814005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.814020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.814106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.814120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.814130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.149 [2024-11-20 17:51:26.814147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.149 [2024-11-20 17:51:26.819693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.149 [2024-11-20 17:51:26.819784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.819804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.149 [2024-11-20 17:51:26.819814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.819829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.819843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.819851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.819860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.149 [2024-11-20 17:51:26.819875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.149 [2024-11-20 17:51:26.823940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.149 [2024-11-20 17:51:26.824034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.824055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.149 [2024-11-20 17:51:26.824079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.824097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.824194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.824210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.824220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.149 [2024-11-20 17:51:26.824236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.149 [2024-11-20 17:51:26.829754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.149 [2024-11-20 17:51:26.829837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.829858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.149 [2024-11-20 17:51:26.829877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.829894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.829908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.829927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.829936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.149 [2024-11-20 17:51:26.829951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.149 [2024-11-20 17:51:26.833999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.149 [2024-11-20 17:51:26.834088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.834109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.149 [2024-11-20 17:51:26.834120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.834146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.834187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.834198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.834207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.149 [2024-11-20 17:51:26.834222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.149 [2024-11-20 17:51:26.839806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.149 [2024-11-20 17:51:26.839892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.839912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.149 [2024-11-20 17:51:26.839922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.839937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.839951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.839959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.839967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.149 [2024-11-20 17:51:26.839981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.149 [2024-11-20 17:51:26.844048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.149 [2024-11-20 17:51:26.844153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.844173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.149 [2024-11-20 17:51:26.844184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.844199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.149 [2024-11-20 17:51:26.844232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.149 [2024-11-20 17:51:26.844242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.149 [2024-11-20 17:51:26.844251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.149 [2024-11-20 17:51:26.844266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.149 [2024-11-20 17:51:26.849854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.149 [2024-11-20 17:51:26.849930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-11-20 17:51:26.849958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.149 [2024-11-20 17:51:26.849968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.149 [2024-11-20 17:51:26.849984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.150 [2024-11-20 17:51:26.849997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.150 [2024-11-20 17:51:26.850006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.150 [2024-11-20 17:51:26.850014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.150 [2024-11-20 17:51:26.850029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.150 [2024-11-20 17:51:26.854115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.150 [2024-11-20 17:51:26.854195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-11-20 17:51:26.854215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.150 [2024-11-20 17:51:26.854226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.150 [2024-11-20 17:51:26.854241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.150 [2024-11-20 17:51:26.854272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.150 [2024-11-20 17:51:26.854283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.150 [2024-11-20 17:51:26.854292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.150 [2024-11-20 17:51:26.854307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.150 [2024-11-20 17:51:26.859901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:45.150 [2024-11-20 17:51:26.859992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-11-20 17:51:26.860011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200fef0 with addr=10.0.0.3, port=4420 00:30:45.150 [2024-11-20 17:51:26.860021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200fef0 is same with the state(6) to be set 00:30:45.150 [2024-11-20 17:51:26.860037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200fef0 (9): Bad file descriptor 00:30:45.150 [2024-11-20 17:51:26.860050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.150 [2024-11-20 17:51:26.860058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:45.150 [2024-11-20 17:51:26.860115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.150 [2024-11-20 17:51:26.860132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.150 [2024-11-20 17:51:26.864163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:30:45.150 [2024-11-20 17:51:26.864238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-11-20 17:51:26.864257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201dcf0 with addr=10.0.0.4, port=4420 00:30:45.150 [2024-11-20 17:51:26.864267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201dcf0 is same with the state(6) to be set 00:30:45.150 [2024-11-20 17:51:26.864282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201dcf0 (9): Bad file descriptor 00:30:45.150 [2024-11-20 17:51:26.864315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:30:45.150 [2024-11-20 17:51:26.864326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:30:45.150 [2024-11-20 17:51:26.864335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:30:45.150 [2024-11-20 17:51:26.864350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
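
The records just below resolve the churn: each discovery service reports its 4420 path "not found" while 4421 is "found again", after which steps @201/@202 assert that each controller is left with exactly one path. Assuming the get_subsystem_paths helper sketched earlier, those assertions reduce to:

    [[ $(get_subsystem_paths mdns0_nvme0) == "4421" ]]
    [[ $(get_subsystem_paths mdns1_nvme0) == "4421" ]]
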
00:30:45.150 [2024-11-20 17:51:26.866317] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:30:45.150 [2024-11-20 17:51:26.866350] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:45.150 [2024-11-20 17:51:26.866383] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:45.150 [2024-11-20 17:51:26.867346] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:30:45.150 [2024-11-20 17:51:26.867376] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:45.150 [2024-11-20 17:51:26.867394] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:45.418 [2024-11-20 17:51:26.952451] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:45.418 [2024-11-20 17:51:26.953423] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:45.986 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.244 17:51:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.502 [2024-11-20 17:51:28.054815] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.502 17:51:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:30:47.437 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:47.438 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.696 [2024-11-20 17:51:29.326833] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:30:47.696 2024/11/20 17:51:29 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:30:47.696 request: 00:30:47.696 { 00:30:47.696 "method": "bdev_nvme_start_mdns_discovery", 00:30:47.696 "params": { 00:30:47.696 "name": "mdns", 00:30:47.696 "svcname": "_nvme-disc._http", 00:30:47.696 "hostnqn": "nqn.2021-12.io.spdk:test" 00:30:47.696 } 00:30:47.696 } 00:30:47.696 Got JSON-RPC error response 00:30:47.696 GoRPCClient: error on JSON-RPC call 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:47.696 17:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:30:48.262 [2024-11-20 17:51:29.915528] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:30:48.262 [2024-11-20 17:51:30.015541] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:30:48.520 [2024-11-20 17:51:30.115543] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:48.520 [2024-11-20 17:51:30.115585] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:48.520 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:48.520 cookie is 0 00:30:48.520 is_local: 1 00:30:48.520 our_own: 0 00:30:48.520 wide_area: 0 00:30:48.520 multicast: 1 00:30:48.520 cached: 1 00:30:48.520 [2024-11-20 17:51:30.215543] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:48.520 [2024-11-20 17:51:30.215594] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:30:48.520 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:48.520 cookie is 0 00:30:48.520 is_local: 1 00:30:48.520 our_own: 0 00:30:48.520 wide_area: 0 00:30:48.520 multicast: 1 00:30:48.520 cached: 1 00:30:48.520 [2024-11-20 17:51:30.215624] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:30:48.778 [2024-11-20 17:51:30.315561] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:30:48.778 [2024-11-20 17:51:30.315604] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:48.778 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:48.778 cookie is 0 00:30:48.778 is_local: 1 00:30:48.778 our_own: 0 00:30:48.778 wide_area: 0 00:30:48.778 multicast: 1 00:30:48.778 cached: 1 00:30:48.778 [2024-11-20 17:51:30.415546] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:30:48.778 [2024-11-20 17:51:30.415600] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:30:48.778 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:48.778 cookie is 0 00:30:48.778 is_local: 1 00:30:48.778 our_own: 0 00:30:48.778 wide_area: 0 00:30:48.778 multicast: 1 00:30:48.778 cached: 1 00:30:48.778 [2024-11-20 17:51:30.415629] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:30:49.345 [2024-11-20 17:51:31.120285] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:30:49.345 [2024-11-20 17:51:31.120328] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:30:49.345 [2024-11-20 17:51:31.120348] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:30:49.604 [2024-11-20 17:51:31.206428] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:30:49.604 [2024-11-20 17:51:31.267408] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:30:49.604 [2024-11-20 17:51:31.267463] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:30:49.604 [2024-11-20 17:51:31.319941] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:30:49.604 [2024-11-20 17:51:31.319972] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:30:49.604 [2024-11-20 17:51:31.320007] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:30:49.862 [2024-11-20 17:51:31.406157] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:30:49.862 [2024-11-20 17:51:31.467049] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:30:49.862 [2024-11-20 17:51:31.467107] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:30:53.234 17:51:34 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:53.234 
17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.234 [2024-11-20 17:51:34.518045] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:30:53.234 2024/11/20 17:51:34 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:30:53.234 request: 00:30:53.234 { 00:30:53.234 "method": "bdev_nvme_start_mdns_discovery", 00:30:53.234 "params": { 00:30:53.234 "name": "cdc", 00:30:53.234 "svcname": "_nvme-disc._tcp", 00:30:53.234 "hostnqn": "nqn.2021-12.io.spdk:test" 00:30:53.234 } 00:30:53.234 } 00:30:53.234 Got JSON-RPC error response 00:30:53.234 GoRPCClient: error on JSON-RPC call 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.234 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # jq -r '.[].name' 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:53.235 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:53.235 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:30:53.235 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:53.235 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:53.235 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:53.235 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:53.235 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:30:53.235 17:51:34 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.235 17:51:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:30:53.235 [2024-11-20 17:51:34.715531] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:30:54.168 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:54.169 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:30:54.169 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:30:54.169 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:30:54.169 17:51:35 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 116050 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 116050 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 116067 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:54.169 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:30:54.169 Got SIGTERM, quitting. 00:30:54.169 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:30:54.169 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:30:54.169 avahi-daemon 0.8 exiting. 00:30:54.428 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.428 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:30:54.428 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.428 17:51:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.428 rmmod nvme_tcp 00:30:54.428 rmmod nvme_fabrics 00:30:54.428 rmmod nvme_keyring 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@513 -- # '[' -n 116000 ']' 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@514 -- # killprocess 116000 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 116000 ']' 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 116000 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116000 00:30:54.428 killing process with pid 116000 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 116000' 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 116000 00:30:54.428 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 116000 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # iptables-save 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:54.688 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:30:54.954 00:30:54.954 real 0m22.802s 00:30:54.954 user 0m43.884s 00:30:54.954 sys 0m2.301s 
00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.954 ************************************ 00:30:54.954 END TEST nvmf_mdns_discovery 00:30:54.954 ************************************ 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.954 ************************************ 00:30:54.954 START TEST nvmf_host_multipath 00:30:54.954 ************************************ 00:30:54.954 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:30:55.231 * Looking for test storage... 00:30:55.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.231 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.232 --rc genhtml_branch_coverage=1 00:30:55.232 --rc genhtml_function_coverage=1 00:30:55.232 --rc genhtml_legend=1 00:30:55.232 --rc geninfo_all_blocks=1 00:30:55.232 --rc geninfo_unexecuted_blocks=1 00:30:55.232 00:30:55.232 ' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.232 --rc genhtml_branch_coverage=1 00:30:55.232 --rc genhtml_function_coverage=1 00:30:55.232 --rc genhtml_legend=1 00:30:55.232 --rc geninfo_all_blocks=1 00:30:55.232 --rc geninfo_unexecuted_blocks=1 00:30:55.232 00:30:55.232 ' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.232 --rc genhtml_branch_coverage=1 00:30:55.232 --rc genhtml_function_coverage=1 00:30:55.232 --rc genhtml_legend=1 00:30:55.232 --rc geninfo_all_blocks=1 00:30:55.232 --rc geninfo_unexecuted_blocks=1 00:30:55.232 00:30:55.232 ' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.232 --rc genhtml_branch_coverage=1 00:30:55.232 --rc genhtml_function_coverage=1 00:30:55.232 --rc genhtml_legend=1 00:30:55.232 --rc geninfo_all_blocks=1 00:30:55.232 --rc geninfo_unexecuted_blocks=1 00:30:55.232 00:30:55.232 ' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.232 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.232 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:55.233 Cannot find device "nvmf_init_br" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:55.233 Cannot find device "nvmf_init_br2" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:55.233 Cannot find device "nvmf_tgt_br" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:55.233 Cannot find device "nvmf_tgt_br2" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:55.233 Cannot find device "nvmf_init_br" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:55.233 Cannot find device "nvmf_init_br2" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:55.233 Cannot find device "nvmf_tgt_br" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:55.233 Cannot find device "nvmf_tgt_br2" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:55.233 Cannot find device "nvmf_br" 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:30:55.233 17:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:55.233 Cannot find device "nvmf_init_if" 00:30:55.233 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:30:55.233 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:55.492 Cannot find device "nvmf_init_if2" 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:30:55.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:55.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:55.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:55.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:30:55.492 00:30:55.492 --- 10.0.0.3 ping statistics --- 00:30:55.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.492 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:55.492 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:55.492 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:30:55.492 00:30:55.492 --- 10.0.0.4 ping statistics --- 00:30:55.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.492 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:55.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:55.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:30:55.492 00:30:55.492 --- 10.0.0.1 ping statistics --- 00:30:55.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.492 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:55.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:55.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:30:55.492 00:30:55.492 --- 10.0.0.2 ping statistics --- 00:30:55.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.492 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:55.492 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=116703 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 116703 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 116703 ']' 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.751 17:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:55.751 [2024-11-20 17:51:37.400472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
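With the topology pinging in both directions, nvmfappstart prepends the namespace wrapper to NVMF_APP (the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ...) line above) so the target runs inside nvmf_tgt_ns_spdk and can listen on the 10.0.0.3/10.0.0.4 addresses, then waitforlisten blocks until the RPC socket answers; the DPDK EAL parameter dump that follows is that process coming up. A rough standalone equivalent, with the launch command taken from the log; the readiness loop here is illustrative rather than the harness's waitforlisten, using rpc_get_methods as a cheap RPC to poll:

  cd /home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is up and serving RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.2
  done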
00:30:55.751 [2024-11-20 17:51:37.400620] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.009 [2024-11-20 17:51:37.545926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:56.009 [2024-11-20 17:51:37.651624] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.009 [2024-11-20 17:51:37.651687] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.009 [2024-11-20 17:51:37.651699] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.009 [2024-11-20 17:51:37.651707] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.009 [2024-11-20 17:51:37.651715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.009 [2024-11-20 17:51:37.651869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.009 [2024-11-20 17:51:37.651877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=116703 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.945 [2024-11-20 17:51:38.674503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.945 17:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:57.512 Malloc0 00:30:57.512 17:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:57.770 17:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.028 17:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:58.286 [2024-11-20 17:51:39.871457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:58.286 17:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4421 00:30:58.544 [2024-11-20 17:51:40.127788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=116807 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 116807 /var/tmp/bdevperf.sock 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 116807 ']' 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:58.544 17:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:59.928 17:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.929 17:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:30:59.929 17:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:59.929 17:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:00.496 Nvme0n1 00:31:00.496 17:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:00.754 Nvme0n1 00:31:00.754 17:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:31:00.754 17:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:02.128 17:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:31:02.128 17:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:02.128 17:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
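The sequence above is the whole multipath fixture in miniature: one TCP transport, one malloc-backed namespace in an ANA-reporting subsystem (-r), two listeners on the same address but different ports, and two bdev_nvme_attach_controller calls against bdevperf that reuse the bdev name Nvme0, where -x multipath turns the second connect into an extra path on the existing controller instead of an error. Collected from the log into one block (paths relative to the spdk_repo checkout):

  # target side: transport, backing bdev, ANA-reporting subsystem, two listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # initiator side: same -b Nvme0 twice; -x multipath adds 4421 as a second path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
          -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
          -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # steer I/O between the paths by flipping per-listener ANA state
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n optimized

Each confirm_io_on_port block that follows verifies the steering from both ends: it asks the target which trsvcid currently carries the expected ANA state,

  scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

and compares that port against the one bpftrace's nvmf_path.bt actually counted I/O on (the @path[10.0.0.3, 4421]: lines, reduced to a port number by the awk/cut/sed pipeline below).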
00:31:02.387 17:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:31:02.387 17:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=116900 00:31:02.387 17:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:02.387 17:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:09.066 Attaching 4 probes... 00:31:09.066 @path[10.0.0.3, 4421]: 14310 00:31:09.066 @path[10.0.0.3, 4421]: 14438 00:31:09.066 @path[10.0.0.3, 4421]: 14861 00:31:09.066 @path[10.0.0.3, 4421]: 14426 00:31:09.066 @path[10.0.0.3, 4421]: 14692 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 116900 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:31:09.066 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:09.325 17:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:31:09.584 17:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:31:09.584 17:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117032 00:31:09.584 17:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:09.584 17:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:16.140 Attaching 4 probes... 00:31:16.140 @path[10.0.0.3, 4420]: 14548 00:31:16.140 @path[10.0.0.3, 4420]: 15483 00:31:16.140 @path[10.0.0.3, 4420]: 15671 00:31:16.140 @path[10.0.0.3, 4420]: 16168 00:31:16.140 @path[10.0.0.3, 4420]: 16308 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117032 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:16.140 17:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:31:16.705 17:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:31:16.705 17:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117163 00:31:16.705 17:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:16.705 17:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:23.265 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:23.265 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:23.265 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:23.265 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:23.265 Attaching 4 probes... 
00:31:23.266 @path[10.0.0.3, 4421]: 12683 00:31:23.266 @path[10.0.0.3, 4421]: 15313 00:31:23.266 @path[10.0.0.3, 4421]: 15391 00:31:23.266 @path[10.0.0.3, 4421]: 15434 00:31:23.266 @path[10.0.0.3, 4421]: 15355 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117163 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:23.266 17:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:31:23.525 17:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:31:23.525 17:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117288 00:31:23.525 17:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:23.525 17:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:30.139 Attaching 4 probes... 
00:31:30.139 00:31:30.139 00:31:30.139 00:31:30.139 00:31:30.139 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117288 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:30.139 17:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:31:30.399 17:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:31:30.399 17:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117419 00:31:30.399 17:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:30.399 17:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:36.957 Attaching 4 probes... 
00:31:36.957 @path[10.0.0.3, 4421]: 15159 00:31:36.957 @path[10.0.0.3, 4421]: 15417 00:31:36.957 @path[10.0.0.3, 4421]: 15556 00:31:36.957 @path[10.0.0.3, 4421]: 15886 00:31:36.957 @path[10.0.0.3, 4421]: 15912 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117419 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:36.957 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:31:36.957 [2024-11-20 17:52:18.670327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670557] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.957 [2024-11-20 17:52:18.670605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.958 [2024-11-20 17:52:18.670614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.958 [2024-11-20 17:52:18.670624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.958 [2024-11-20 17:52:18.670634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d09690 is same with the state(6) to be set 00:31:36.958 17:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:31:38.354 17:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:31:38.354 17:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117555 00:31:38.354 17:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:38.354 17:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:44.909 Attaching 4 probes... 
00:31:44.909 @path[10.0.0.3, 4420]: 15088 00:31:44.909 @path[10.0.0.3, 4420]: 14390 00:31:44.909 @path[10.0.0.3, 4420]: 14859 00:31:44.909 @path[10.0.0.3, 4420]: 14546 00:31:44.909 @path[10.0.0.3, 4420]: 15697 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117555 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:44.909 17:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:31:44.909 [2024-11-20 17:52:26.255653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:44.909 17:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:31:44.909 17:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:31:51.469 17:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:31:51.469 17:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117737 00:31:51.469 17:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:51.469 17:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:58.078 Attaching 4 probes... 
00:31:58.078 @path[10.0.0.3, 4421]: 13170 00:31:58.078 @path[10.0.0.3, 4421]: 15287 00:31:58.078 @path[10.0.0.3, 4421]: 15133 00:31:58.078 @path[10.0.0.3, 4421]: 15057 00:31:58.078 @path[10.0.0.3, 4421]: 15626 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117737 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 116807 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 116807 ']' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 116807 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116807 00:31:58.078 killing process with pid 116807 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116807' 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 116807 00:31:58.078 17:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 116807 00:31:58.078 { 00:31:58.078 "results": [ 00:31:58.078 { 00:31:58.078 "job": "Nvme0n1", 00:31:58.078 "core_mask": "0x4", 00:31:58.078 "workload": "verify", 00:31:58.078 "status": "terminated", 00:31:58.078 "verify_range": { 00:31:58.078 "start": 0, 00:31:58.078 "length": 16384 00:31:58.078 }, 00:31:58.078 "queue_depth": 128, 00:31:58.078 "io_size": 4096, 00:31:58.078 "runtime": 56.311893, 00:31:58.078 "iops": 6557.229393797861, 00:31:58.078 "mibps": 25.614177319522895, 00:31:58.078 "io_failed": 0, 00:31:58.078 "io_timeout": 0, 00:31:58.078 "avg_latency_us": 19487.180719955686, 00:31:58.078 "min_latency_us": 452.4218181818182, 00:31:58.078 "max_latency_us": 7046430.72 00:31:58.078 } 00:31:58.078 ], 00:31:58.079 "core_count": 1 00:31:58.079 } 00:31:58.079 17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 116807 00:31:58.079 17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:58.079 [2024-11-20 17:51:40.215111] Starting SPDK v24.09.1-pre git sha1 
b18e1bd62 / DPDK 23.11.0 initialization... 00:31:58.079 [2024-11-20 17:51:40.215244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116807 ] 00:31:58.079 [2024-11-20 17:51:40.352938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.079 [2024-11-20 17:51:40.460015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.079 [2024-11-20 17:51:42.407890] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:31:58.079 Running I/O for 90 seconds... 00:31:58.079 8203.00 IOPS, 32.04 MiB/s [2024-11-20T17:52:39.867Z] 8298.50 IOPS, 32.42 MiB/s [2024-11-20T17:52:39.867Z] 7999.33 IOPS, 31.25 MiB/s [2024-11-20T17:52:39.867Z] 7820.50 IOPS, 30.55 MiB/s [2024-11-20T17:52:39.867Z] 7727.80 IOPS, 30.19 MiB/s [2024-11-20T17:52:39.867Z] 7652.00 IOPS, 29.89 MiB/s [2024-11-20T17:52:39.867Z] 7599.14 IOPS, 29.68 MiB/s [2024-11-20T17:52:39.867Z] 7580.50 IOPS, 29.61 MiB/s [2024-11-20T17:52:39.867Z] [2024-11-20 17:51:51.214430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.079 [2024-11-20 17:51:51.214562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.214924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.214985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
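This wall of nvme_io_qpair_print_command/print_completion pairs (continuing below) is the expected fallout of flipping port 4421 to inaccessible at 17:51:51: every command queued on that path completes with ASYMMETRIC ACCESS INACCESSIBLE, where the (03/02) decodes as status code type 03h (path-related) with status code 02h, and dnr:0 marks the completion retryable, so the multipath bdev resubmits the I/O on the still-reachable 4420 path instead of failing it up the stack. To gauge how many commands were bounced this way in the captured bdevperf log (try.txt is the file cat'ed above):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
          /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt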
00:31:58.079 [2024-11-20 17:51:51.215669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.079 [2024-11-20 17:51:51.215839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.079 [2024-11-20 17:51:51.215852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.215872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.215886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.215905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.215919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.215939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.215959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:58.080 [2024-11-20 17:51:51.216940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.080 [2024-11-20 17:51:51.216960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[... ~50 similar nvme_qpair.c command/completion pairs elided (2024-11-20 17:51:51.216992 - 17:51:51.221350): WRITE sqid:1 nsid:1, lba:127688-128088 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd:0012-0044 ...]
00:31:58.081 7583.78 IOPS, 29.62 MiB/s [2024-11-20T17:52:39.869Z] 7554.80 IOPS, 29.51 MiB/s [2024-11-20T17:52:39.869Z] 7575.00 IOPS, 29.59 MiB/s [2024-11-20T17:52:39.869Z] 7592.50 IOPS, 29.66 MiB/s [2024-11-20T17:52:39.869Z] 7634.23 IOPS, 29.82 MiB/s [2024-11-20T17:52:39.869Z] 7667.36 IOPS, 29.95 MiB/s [2024-11-20T17:52:39.869Z] 7688.20 IOPS, 30.03 MiB/s [2024-11-20T17:52:39.869Z]
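The periodic throughput samples above hang together with the per-command lines: every I/O in this run is len:8 blocks of 512 bytes, matching the 0x1000-byte (4 KiB) SGL payload each command prints, so the reported MiB/s should equal IOPS x 4096 / 2^20 = IOPS / 256. A quick check (a hypothetical standalone helper, not anything from the test suite):

/* Hypothetical sanity check, not part of the SPDK test code: with a
 * 4 KiB I/O size, MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256. */
#include <stdio.h>

int main(void)
{
    const double iops[] = { 7583.78, 7554.80, 7575.00, 7592.50,
                            7634.23, 7667.36, 7688.20 };
    for (size_t i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
        /* Prints 29.62, 29.51, 29.59, 29.66, 29.82, 29.95, 30.03:
         * the same MiB/s figures the sampler logged. */
        printf("%8.2f IOPS -> %5.2f MiB/s\n", iops[i], iops[i] / 256.0);
    }
    return 0;
}

All seven samples round to exactly the MiB/s values in the log, confirming the 4 KiB I/O size.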
00:31:58.081 [2024-11-20 17:51:57.889821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.081 [2024-11-20 17:51:57.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... ~65 similar command/completion pairs elided (2024-11-20 17:51:57.889996 - 17:51:57.897064): WRITE sqid:1 nsid:1, lba:22152-22656 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, with one interleaved READ of lba:22144 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
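Every completion in this stretch prints the same status, abbreviated (03/02): status code type 0x3 (Path Related Status) carrying status code 0x02 (Asymmetric Access Inaccessible), i.e. the ANA state this test drives the path into. A minimal sketch of how that pair and the trailing p/m/dnr fields pack into the NVMe completion Status Field (layout per the NVMe base specification, CQE Dword 3 with the phase tag in bit 0 of the 16-bit view; an illustration, not SPDK's own decoder):

/* Illustration of the Status Field behind the "(03/02)" and
 * "p:0 m:0 dnr:0" fields in the notices above.  Not SPDK's code. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    unsigned p   : 1;  /* phase tag                      */
    unsigned sc  : 8;  /* status code           -> "02"  */
    unsigned sct : 3;  /* status code type      -> "03"  */
    unsigned crd : 2;  /* command retry delay            */
    unsigned m   : 1;  /* more information available     */
    unsigned dnr : 1;  /* do not retry                   */
};

int main(void)
{
    /* SCT 0x3 = Path Related Status, SC 0x02 = ANA Inaccessible:
     * the combination every completion in this log reports. */
    uint16_t raw = (uint16_t)((0x3u << 9) | (0x02u << 1));

    struct nvme_status s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xff,
        .sct = (raw >> 9) & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };

    printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
           (unsigned)s.sct, (unsigned)s.sc, (unsigned)s.p,
           (unsigned)s.m, (unsigned)s.dnr,
           (s.sct == 0x3 && s.sc == 0x02)
               ? "ASYMMETRIC ACCESS INACCESSIBLE" : "other");
    return 0;
}

Note dnr:0 on every completion: the do-not-retry bit is clear, so the host is allowed to resubmit, which is consistent with the same LBAs reappearing below under fresh cid values.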
[... ~80 similar command/completion pairs elided (2024-11-20 17:51:57.897064 - 17:51:57.901060): WRITE sqid:1 nsid:1, lba:22664-22832 and lba:22840-23088 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, plus one READ of lba:22136 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and resubmitted WRITEs of lba:22152-22328 under new cid values; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:31:58.085 [2024-11-20 17:51:57.901089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.085 [2024-11-20 17:51:57.901491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:58.085 [2024-11-20 17:51:57.901512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.901968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.901983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.086 [2024-11-20 17:51:57.902256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:31:58.086 [2024-11-20 17:51:57.902421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.902471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.902485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.903969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:58.086 [2024-11-20 17:51:57.903989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.086 [2024-11-20 17:51:57.904007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:58.087 [2024-11-20 17:51:57.904466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.904970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.904992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.087 [2024-11-20 17:51:57.905007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.905094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.905131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.905166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.905218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.087 [2024-11-20 17:51:57.905258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:58.087 [2024-11-20 17:51:57.905279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.905293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.905314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.905329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.905350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.905365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.905400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.905415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.905435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.915765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.915829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.915873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.915889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.915911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.915927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.915964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.915981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:31:58.088 [2024-11-20 17:51:57.916095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.916247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.916263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.917976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.088 [2024-11-20 17:51:57.917997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.088 [2024-11-20 17:51:57.918012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:58.089 [2024-11-20 17:51:57.918172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.089 [2024-11-20 17:51:57.918523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:58.089 [2024-11-20 17:51:57.918544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:58.089 [2024-11-20 17:51:57.918560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:58.089 [2024-11-20 17:51:57.918582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:58.089 [2024-11-20 17:51:57.918597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
[... many further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of the same form omitted: between 17:51:57.918 and 17:51:57.929, every outstanding I/O on qid:1 (WRITEs of len:8 across lba 22152-23152, plus READs at lba 22136 and 22144) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), all with p:0 m:0 dnr:0 ...]
00:31:58.094 [2024-11-20 17:51:57.929501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:58.094 [2024-11-20 17:51:57.929516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:31:58.094 [2024-11-20 17:51:57.929537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.094 [2024-11-20 17:51:57.929552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:58.094 [2024-11-20 17:51:57.929573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.094 [2024-11-20 17:51:57.929587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:58.094 [2024-11-20 17:51:57.929608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.094 [2024-11-20 17:51:57.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.094 [2024-11-20 17:51:57.929643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.094 [2024-11-20 17:51:57.929658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.929970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.929985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.930006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.930021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.930825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.930852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.930906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.930926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.930948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.930963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.930985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:58.095 [2024-11-20 17:51:57.931603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.931974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.931987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.932007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.932037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.932057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.095 [2024-11-20 17:51:57.932070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:58.095 [2024-11-20 17:51:57.932090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:31:58.096 [2024-11-20 17:51:57.932841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.932963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.932984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.933004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.933041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.933055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.933075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.933089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.096 [2024-11-20 17:51:57.941707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.941837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.941857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.942327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.942367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:58.096 [2024-11-20 17:51:57.942434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.096 [2024-11-20 17:51:57.942463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.942958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.942979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:58.097 [2024-11-20 17:51:57.943034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.943978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.943998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.097 [2024-11-20 17:51:57.944563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:58.097 [2024-11-20 17:51:57.944598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.944655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.944710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.944781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.944868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:31:58.098 [2024-11-20 17:51:57.944925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.944957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.944994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:51:57.945149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.945968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.945988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.946023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.946044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.946095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.946120] nvme_qpair.c: 
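For readers unfamiliar with the notation: the "(03/02)" on every completion is the NVMe status code type / status code pair -- SCT 0x3 is Path Related Status and, under it, SC 0x02 is Asymmetric Access Inaccessible, meaning the ANA state of this path forbids I/O, so the controller fails every queued command back to the host. A minimal decoding sketch (illustrative C, not SPDK's own code; the bit layout is the 16-bit status word from completion queue entry DW3 in the NVMe base spec):

    /* Decode the fields spdk_nvme_print_completion prints as
     * "(SCT/SC) ... p:_ m:_ dnr:_".  In the CQE status halfword:
     * phase is bit 0, SC bits 8:1, SCT bits 11:9, M bit 14, DNR bit 15. */
    #include <stdio.h>
    #include <stdint.h>

    static void decode_status(uint16_t w)
    {
        unsigned p   = w & 1;
        unsigned sc  = (w >> 1) & 0xff;
        unsigned sct = (w >> 9) & 0x7;
        unsigned m   = (w >> 14) & 1;
        unsigned dnr = (w >> 15) & 1;

        printf("sct:%02x sc:%02x p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
               (sct == 0x3 && sc == 0x02)
                   ? "  -> ASYMMETRIC ACCESS INACCESSIBLE"
                   : "");
    }

    int main(void)
    {
        /* 0x0604 = SCT 0x3, SC 0x02, p/m/dnr all 0 -- the (03/02)
         * pattern seen on every completion in the burst above. */
        decode_status(0x0604);
        return 0;
    }

Note dnr:0 throughout: "do not retry" is clear, so the initiator is free to retry these commands on another path once the ANA state changes.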
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.946155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.946226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.946247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:51:57.946515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.098 [2024-11-20 17:51:57.946546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:58.098 7340.94 IOPS, 28.68 MiB/s [2024-11-20T17:52:39.886Z] 7218.47 IOPS, 28.20 MiB/s [2024-11-20T17:52:39.886Z] 7241.22 IOPS, 28.29 MiB/s [2024-11-20T17:52:39.886Z] 7264.32 IOPS, 28.38 MiB/s [2024-11-20T17:52:39.886Z] 7287.25 IOPS, 28.47 MiB/s [2024-11-20T17:52:39.886Z] 7308.10 IOPS, 28.55 MiB/s [2024-11-20T17:52:39.886Z] 7322.64 IOPS, 28.60 MiB/s [2024-11-20T17:52:39.886Z] [2024-11-20 17:52:05.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:52:05.092311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:52:05.092411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:52:05.092433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:52:05.092455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:52:05.092470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:52:05.092490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:52:05.092503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:52:05.092523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.098 [2024-11-20 17:52:05.092536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:58.098 [2024-11-20 17:52:05.092556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
[log condensed: 64 READ commands (sqid:1, lba:7760-8264, len:8, SGL TRANSPORT DATA BLOCK) and 36 WRITE commands (sqid:1, lba:8272-8552, len:8, SGL DATA BLOCK OFFSET) logged at 2024-11-20 17:52:05.092-05.097 each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
7165.65 IOPS, 27.99 MiB/s [2024-11-20T17:52:39.889Z] 6867.08 IOPS, 26.82 MiB/s [2024-11-20T17:52:39.889Z] 6592.40 IOPS, 25.75 MiB/s [2024-11-20T17:52:39.889Z] 6338.85 IOPS, 24.76 MiB/s [2024-11-20T17:52:39.889Z] 6104.07 IOPS, 23.84 MiB/s [2024-11-20T17:52:39.889Z] 5886.07 IOPS, 22.99 MiB/s [2024-11-20T17:52:39.889Z] 5683.10 IOPS, 22.20 MiB/s [2024-11-20T17:52:39.889Z] 5629.20 IOPS, 21.99 MiB/s [2024-11-20T17:52:39.889Z] 5696.84 IOPS, 22.25 MiB/s [2024-11-20T17:52:39.889Z] 5760.84 IOPS, 22.50 MiB/s [2024-11-20T17:52:39.889Z] 5821.82 IOPS, 22.74 MiB/s [2024-11-20T17:52:39.889Z] 5885.82 IOPS, 22.99 MiB/s [2024-11-20T17:52:39.889Z] 5944.40 IOPS, 23.22 MiB/s [2024-11-20T17:52:39.889Z] 5990.94 IOPS, 23.40 MiB/s [2024-11-20T17:52:39.889Z]
[log condensed: at 2024-11-20 17:52:18.670 a new WRITE/READ burst (sqid:1, lba:24192 onward) begins; it is condensed after the note below]
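Two distinct completion statuses appear around this point: the path-related ANA error ASYMMETRIC ACCESS INACCESSIBLE (03/02) seen so far, and ABORTED - SQ DELETION (00/08) once the submission queue is torn down in the burst condensed below. A small tally over the printed spdk_nvme_print_completion lines makes the split easy to see; this is a hypothetical helper derived only from the visible line format, not part of SPDK:

    import re
    from collections import Counter

    # Count completions per (status name, SCT/SC code) pair in a log
    # excerpt shaped like the NOTICE lines above.
    status_re = re.compile(r"\*NOTICE\*: (.+?) \((\d{2}/\d{2})\) qid:\d+")

    def tally_statuses(log_text: str) -> Counter:
        return Counter(f"{name} ({code})"
                       for name, code in status_re.findall(log_text))

    excerpt = (
        "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0\n"
        "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0\n"
    )
    print(tally_statuses(excerpt))
    # Counter({'ASYMMETRIC ACCESS INACCESSIBLE (03/02)': 1,
    #          'ABORTED - SQ DELETION (00/08)': 1})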
[log condensed: 8 WRITE commands (sqid:1, lba:24192-24248) and 15 READ commands (sqid:1, lba:23624-23736) logged at 2024-11-20 17:52:18.670-18.671 each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; from the WRITE at lba:24256 onward every completion instead prints ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000, covering 48 WRITE commands (lba:24256-24632) and the READ commands from lba:23744 onward, of which lba:23744-23904 complete within this excerpt]
00:31:58.104 [2024-11-20 17:52:18.674151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.104 [2024-11-20 17:52:18.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.674980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.674995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.104 [2024-11-20 17:52:18.675147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.104 [2024-11-20 17:52:18.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.104 [2024-11-20 17:52:18.675741] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x60f1f0 was disconnected and freed. reset controller. 
00:31:58.104 [2024-11-20 17:52:18.675864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:58.104 [2024-11-20 17:52:18.675889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.104 [2024-11-20 17:52:18.675906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:58.104 [2024-11-20 17:52:18.675920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.105 [2024-11-20 17:52:18.675935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:58.105 [2024-11-20 17:52:18.675949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.105 [2024-11-20 17:52:18.675964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:58.105 [2024-11-20 17:52:18.675977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.105 [2024-11-20 17:52:18.675993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.105 [2024-11-20 17:52:18.676008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.105 [2024-11-20 17:52:18.676029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x605450 is same with the state(6) to be set
00:31:58.105 [2024-11-20 17:52:18.677950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:58.105 [2024-11-20 17:52:18.677996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605450 (9): Bad file descriptor
00:31:58.105 [2024-11-20 17:52:18.678438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.105 [2024-11-20 17:52:18.678471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x605450 with addr=10.0.0.3, port=4421
00:31:58.105 [2024-11-20 17:52:18.678489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x605450 is same with the state(6) to be set
00:31:58.105 [2024-11-20 17:52:18.678528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605450 (9): Bad file descriptor
00:31:58.105 [2024-11-20 17:52:18.678553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:58.105 [2024-11-20 17:52:18.678569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:58.105 [2024-11-20 17:52:18.678584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:58.105 [2024-11-20 17:52:18.678610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
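[Editor's note] The sequence above is the multipath failover under test: the active path's qpair is torn down (every outstanding I/O completes as ABORTED - SQ DELETION), and bdev_nvme immediately probes the second portal on 10.0.0.3:4421, getting connect() errno 111 (ECONNREFUSED) until that listener is reachable again. A minimal sketch of how such a portal flap can be driven from the shell, assuming the rpc.py conventions visible elsewhere in this log; the actual commands inside multipath.sh are not shown in this excerpt:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the second portal; connects to it then fail with errno 111 ...
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 5
    # ... then restore it; the bdev_nvme reset path keeps retrying until the
    # connect succeeds ("Resetting controller successful" below).
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421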
00:31:58.105 [2024-11-20 17:52:18.678626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
6041.57 IOPS, 23.60 MiB/s
[2024-11-20T17:52:39.893Z] 6081.21 IOPS, 23.75 MiB/s
[2024-11-20T17:52:39.893Z] 6109.92 IOPS, 23.87 MiB/s
[2024-11-20T17:52:39.893Z] 6146.02 IOPS, 24.01 MiB/s
[2024-11-20T17:52:39.893Z] 6176.56 IOPS, 24.13 MiB/s
[2024-11-20T17:52:39.893Z] 6209.93 IOPS, 24.26 MiB/s
[2024-11-20T17:52:39.893Z] 6248.56 IOPS, 24.41 MiB/s
[2024-11-20T17:52:39.893Z] 6281.95 IOPS, 24.54 MiB/s
[2024-11-20T17:52:39.893Z] 6313.87 IOPS, 24.66 MiB/s
[2024-11-20T17:52:39.893Z] 6345.20 IOPS, 24.79 MiB/s
[2024-11-20T17:52:39.893Z] [2024-11-20 17:52:28.789932] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:58.105 6374.85 IOPS, 24.90 MiB/s
[2024-11-20T17:52:39.893Z] 6403.23 IOPS, 25.01 MiB/s
[2024-11-20T17:52:39.893Z] 6428.53 IOPS, 25.11 MiB/s
[2024-11-20T17:52:39.893Z] 6442.22 IOPS, 25.16 MiB/s
[2024-11-20T17:52:39.893Z] 6443.98 IOPS, 25.17 MiB/s
[2024-11-20T17:52:39.893Z] 6466.33 IOPS, 25.26 MiB/s
[2024-11-20T17:52:39.893Z] 6486.38 IOPS, 25.34 MiB/s
[2024-11-20T17:52:39.893Z] 6506.22 IOPS, 25.41 MiB/s
[2024-11-20T17:52:39.893Z] 6525.47 IOPS, 25.49 MiB/s
[2024-11-20T17:52:39.893Z] 6552.68 IOPS, 25.60 MiB/s
[2024-11-20T17:52:39.893Z] Received shutdown signal, test time was about 56.312728 seconds
00:31:58.105
00:31:58.105 Latency(us)
00:31:58.105 [2024-11-20T17:52:39.893Z] Device Information          : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:31:58.105 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:58.105 Verification LBA range: start 0x0 length 0x4000
00:31:58.105 Nvme0n1                     : 56.31       6557.23  25.61  0.00    0.00  19487.18  452.42  7046430.72
00:31:58.105 [2024-11-20T17:52:39.893Z] ===================================================================================================================
00:31:58.105 [2024-11-20T17:52:39.893Z] Total                       : 56.31       6557.23  25.61  0.00    0.00  19487.18  452.42  7046430.72
00:31:58.105 [2024-11-20 17:52:38.938803] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 116703 ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 116703 ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116703'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 116703
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']'
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
17:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:31:58.623
00:31:58.623 real	1m3.495s
00:31:58.623 user	3m0.714s
00:31:58.623 sys	0m13.740s
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
17:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath
************************************
17:52:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
17:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
17:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
17:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_timeout
************************************
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
* Looking for test storage...
00:31:58.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same golangci/protoc/go toolchain entries repeated several times...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...the same toolchain entries repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...the same toolchain entries repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...the same toolchain entries repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs
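[Editor's note] The "[: : integer expression expected" message above is a real, if harmless, glitch: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' with an unset variable. A hedged sketch of the usual bash fix; "some_flag" is an illustrative name, since the log does not show which flag line 33 actually tests:

    # Hypothetical hardening: default the empty/unset flag to 0 before the
    # numeric comparison so test(1) never sees an empty operand.
    some_flag=${some_flag:-0}
    if [ "$some_flag" -eq 1 ]; then
        : # flag-specific setup would go here
    fi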
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
Cannot find device "nvmf_init_br"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
Cannot find device "nvmf_init_br2"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
Cannot find device "nvmf_tgt_br"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
Cannot find device "nvmf_tgt_br2"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
Cannot find device "nvmf_init_br"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
Cannot find device "nvmf_init_br2"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
Cannot find device "nvmf_tgt_br"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
Cannot find device "nvmf_tgt_br2"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
Cannot find device "nvmf_br"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
Cannot find device "nvmf_init_if"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
Cannot find device "nvmf_init_if2"
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
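[Editor's note] The topology the harness just built boils down to: two initiator veth pairs in the root namespace (10.0.0.1, 10.0.0.2), two target veth pairs whose interface ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), and all four bridge-side ends enslaved to a single bridge, nvmf_br. A condensed replay for one initiator/target pair, using only commands mirrored from the trace above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge the two halves together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (plus 'ip link set ... up' on every interface, as traced above)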
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms

--- 10.0.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms

--- 10.0.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=118107
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 118107
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 118107 ']' 00:31:59.142 17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.142 17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.142 17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.143 17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.143 17:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:59.143 [2024-11-20 17:52:40.864706] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:59.143 [2024-11-20 17:52:40.864791] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.401 [2024-11-20 17:52:40.998056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:59.401 [2024-11-20 17:52:41.099825] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.401 [2024-11-20 17:52:41.099916] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.401 [2024-11-20 17:52:41.099930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.401 [2024-11-20 17:52:41.099939] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.401 [2024-11-20 17:52:41.099946] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
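[Annotation] nvmfappstart (common.sh@503-506 above) runs nvmf_tgt inside the target namespace and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified equivalent, using the repo paths from the log; the polling loop is a paraphrase of what waitforlisten does, not its actual code:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # -e 0xFFFF enables all tracepoint groups (matches the "Tracepoint Group
    # Mask 0xFFFF specified" notice); -m 0x3 runs reactors on cores 0 and 1.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # keep polling until the RPC socket is listening
    done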
00:31:59.401 [2024-11-20 17:52:41.100127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.401 [2024-11-20 17:52:41.100131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:59.740 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.005 [2024-11-20 17:52:41.584628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.005 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:00.264 Malloc0 00:32:00.264 17:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.522 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.087 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:01.346 [2024-11-20 17:52:42.876686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=118189 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 118189 /var/tmp/bdevperf.sock 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 118189 ']' 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
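[Annotation] timeout.sh@25-29 above provision the target entirely over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.3:4420. The same sequence as standalone commands (flags copied verbatim from the log; -o and -u 8192 are the TCP transport options common.sh assembled earlier, and rpc.py defaults to /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420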
00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.346 17:52:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:01.346 [2024-11-20 17:52:42.962585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:01.346 [2024-11-20 17:52:42.962694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118189 ] 00:32:01.346 [2024-11-20 17:52:43.102790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.604 [2024-11-20 17:52:43.204167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:02.538 17:52:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.538 17:52:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:32:02.538 17:52:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:02.538 17:52:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:32:02.796 NVMe0n1 00:32:02.796 17:52:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:02.796 17:52:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=118239 00:32:02.797 17:52:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:32:03.054 Running I/O for 10 seconds... 
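[Annotation] timeout.sh@31-51 start bdevperf on core 2 (-m 0x4, -z waits for perform_tests before generating load) with a 10-second, 128-deep, 4 KiB verify workload, then attach the controller with the reconnect policy this test exercises: -r -1 appears to request unlimited bdev-layer I/O retries, --reconnect-delay-sec 2 spaces reconnect attempts two seconds apart, and --ctrlr-loss-timeout-sec 5 gives up on the controller once reconnects have failed for five seconds. A standalone sketch with paths and flags taken from the log:

    bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bp -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &

The 2 s delay / 5 s loss timeout pair is visible later in the log: reconnect attempts land at 17:52:45, :47 and :49, and the controller is declared lost at 17:52:51.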
00:32:03.989 17:52:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:04.249 8576.00 IOPS, 33.50 MiB/s [2024-11-20T17:52:46.037Z] [2024-11-20 17:52:45.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.249 [2024-11-20 17:52:45.891408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.249 [2024-11-20 17:52:45.891439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.249 [2024-11-20 17:52:45.891451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further nvme_qpair print_command/print_completion pairs elided: every remaining queued READ (lba 83240-83864) and WRITE (lba 83880-84256, len:8) on qid:1 completed as ABORTED - SQ DELETION (00/08), same timestamps 17:52:45.891-894 ...]
00:32:04.252 [2024-11-20 17:52:45.894269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8956e0 is same with the state(6) to be set 00:32:04.252 [2024-11-20 17:52:45.894283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.252 [2024-11-20 17:52:45.894292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.252 [2024-11-20 17:52:45.894300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83872 len:8 PRP1 0x0 PRP2 0x0 00:32:04.253 [2024-11-20 17:52:45.894309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.253 [2024-11-20 17:52:45.894383] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8956e0 was disconnected and freed. reset controller.
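[Annotation] The abort burst above is the expected effect of timeout.sh@55 removing the listener mid-run: the target drops the TCP connection, every command still queued on the I/O qpair completes as ABORTED - SQ DELETION, and bdev_nvme frees the qpair and schedules a controller reset. A burst like this can be quantified from a saved copy of the log (file name hypothetical):

    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log        # total aborted completions
    grep -o 'lba:[0-9]*' nvmf_timeout.log | sort -u | wc -l # distinct LBAs that were in flight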
00:32:04.253 [2024-11-20 17:52:45.894476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.253 [2024-11-20 17:52:45.894492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.253 [2024-11-20 17:52:45.894504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.253 [2024-11-20 17:52:45.894514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.253 [2024-11-20 17:52:45.894524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.253 [2024-11-20 17:52:45.894534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.253 [2024-11-20 17:52:45.894545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.253 [2024-11-20 17:52:45.894554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.253 [2024-11-20 17:52:45.894563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81d1c0 is same with the state(6) to be set 00:32:04.253 [2024-11-20 17:52:45.894795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:04.253 [2024-11-20 17:52:45.894833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81d1c0 (9): Bad file descriptor 00:32:04.253 [2024-11-20 17:52:45.894964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:04.253 [2024-11-20 17:52:45.894996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81d1c0 with addr=10.0.0.3, port=4420 00:32:04.253 [2024-11-20 17:52:45.895010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81d1c0 is same with the state(6) to be set 00:32:04.253 [2024-11-20 17:52:45.895037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81d1c0 (9): Bad file descriptor 00:32:04.253 [2024-11-20 17:52:45.895062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:04.253 [2024-11-20 17:52:45.895088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:04.253 [2024-11-20 17:52:45.895101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:04.253 [2024-11-20 17:52:45.895123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.253 [2024-11-20 17:52:45.895135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:04.253 17:52:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:32:06.127 5202.50 IOPS, 20.32 MiB/s [2024-11-20T17:52:47.915Z] 3468.33 IOPS, 13.55 MiB/s [2024-11-20T17:52:47.915Z] [2024-11-20 17:52:47.895514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.127 [2024-11-20 17:52:47.895628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81d1c0 with addr=10.0.0.3, port=4420
00:32:06.127 [2024-11-20 17:52:47.895652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81d1c0 is same with the state(6) to be set
00:32:06.127 [2024-11-20 17:52:47.895685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81d1c0 (9): Bad file descriptor
00:32:06.127 [2024-11-20 17:52:47.895709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:06.127 [2024-11-20 17:52:47.895722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:06.127 [2024-11-20 17:52:47.895735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:06.127 [2024-11-20 17:52:47.895772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:06.127 [2024-11-20 17:52:47.895785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:06.385 17:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:32:06.385 17:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:06.385 17:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:32:06.643 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:32:06.643 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:32:06.643 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:32:06.643 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:32:06.902 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:32:06.902 17:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:32:08.280 2601.25 IOPS, 10.16 MiB/s [2024-11-20T17:52:50.068Z] 2081.00 IOPS, 8.13 MiB/s [2024-11-20T17:52:50.068Z] [2024-11-20 17:52:49.896126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.280 [2024-11-20 17:52:49.896253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81d1c0 with addr=10.0.0.3, port=4420
00:32:08.280 [2024-11-20 17:52:49.896274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81d1c0 is same with the state(6) to be set
00:32:08.280 [2024-11-20 17:52:49.896315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81d1c0 (9): Bad file descriptor
00:32:08.280 [2024-11-20 17:52:49.896339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:08.280 [2024-11-20 17:52:49.896351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:08.280 [2024-11-20 17:52:49.896366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:08.280 [2024-11-20 17:52:49.896412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:08.280 [2024-11-20 17:52:49.896427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:10.194 1734.17 IOPS, 6.77 MiB/s [2024-11-20T17:52:51.982Z] 1486.43 IOPS, 5.81 MiB/s [2024-11-20T17:52:51.982Z] [2024-11-20 17:52:51.896489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:10.194 [2024-11-20 17:52:51.896631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:10.194 [2024-11-20 17:52:51.896647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:10.194 [2024-11-20 17:52:51.896659] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:32:10.194 [2024-11-20 17:52:51.896718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:11.131 1300.62 IOPS, 5.08 MiB/s
00:32:11.131 Latency(us)
00:32:11.131 [2024-11-20T17:52:52.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:11.131 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:11.131 Verification LBA range: start 0x0 length 0x4000
00:32:11.131 NVMe0n1 : 8.22 1266.54 4.95 15.58 0.00 99650.43 2383.13 7015926.69
00:32:11.131 [2024-11-20T17:52:52.919Z] ===================================================================================================================
00:32:11.131 [2024-11-20T17:52:52.919Z] Total : 1266.54 4.95 15.58 0.00 99650.43 2383.13 7015926.69
00:32:11.131 {
00:32:11.131 "results": [
00:32:11.131 {
00:32:11.131 "job": "NVMe0n1",
00:32:11.131 "core_mask": "0x4",
00:32:11.131 "workload": "verify",
00:32:11.131 "status": "finished",
00:32:11.131 "verify_range": {
00:32:11.131 "start": 0,
00:32:11.131 "length": 16384
00:32:11.131 },
00:32:11.131 "queue_depth": 128,
00:32:11.131 "io_size": 4096,
00:32:11.131 "runtime": 8.215297,
00:32:11.131 "iops": 1266.5397246137297,
00:32:11.131 "mibps": 4.947420799272382,
00:32:11.131 "io_failed": 128,
00:32:11.131 "io_timeout": 0,
00:32:11.131 "avg_latency_us": 99650.4298600934,
00:32:11.131 "min_latency_us": 2383.1272727272726,
00:32:11.131 "max_latency_us": 7015926.69090909
00:32:11.131 }
00:32:11.131 ],
00:32:11.131 "core_count": 1
00:32:11.131 }
00:32:12.067 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:32:12.067 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:32:12.067 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:12.326 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:32:12.326 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:32:12.326 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:32:12.326 17:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 118239
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 118189
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 118189 ']'
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 118189
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118189
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
killing process with pid 118189
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118189'
Received shutdown signal, test time was about 9.653686 seconds
00:32:12.585
00:32:12.585 Latency(us)
00:32:12.585 [2024-11-20T17:52:54.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.585 [2024-11-20T17:52:54.373Z] ===================================================================================================================
00:32:12.585 [2024-11-20T17:52:54.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:12.585 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 118189
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 118189
00:32:13.152 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-20 17:52:54.960761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=118393
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 118393 /var/tmp/bdevperf.sock
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 118393 ']'
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:13.409 17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
17:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:32:13.409 [2024-11-20 17:52:55.058400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-11-20 17:52:55.058523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118393 ]
00:32:13.667 [2024-11-20 17:52:55.203291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:13.667 [2024-11-20 17:52:55.337044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:32:14.679 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:14.679 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:32:14.679 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:32:14.937 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:32:15.196 NVMe0n1
00:32:15.196 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=118441
00:32:15.196 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:15.196 17:52:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:32:15.454 Running I/O for 10 seconds...
00:32:16.390 17:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:16.651 8728.00 IOPS, 34.09 MiB/s [2024-11-20T17:52:58.439Z] [2024-11-20 17:52:58.251804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.251993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 
17:52:58.252026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb23960 is same with the state(6) to be set 00:32:16.651 [2024-11-20 17:52:58.252520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:16.651 [2024-11-20 17:52:58.252886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.252981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.252992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.253005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.651 [2024-11-20 17:52:58.253021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.651 [2024-11-20 17:52:58.253034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253153] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.652 [2024-11-20 17:52:58.253401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.253988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.652 [2024-11-20 17:52:58.253999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.652 [2024-11-20 17:52:58.254012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 
17:52:58.254161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:16.653 [2024-11-20 17:52:58.254185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.254977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.255009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.653 [2024-11-20 17:52:58.255022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.653 [2024-11-20 17:52:58.255033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.255181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.654 [2024-11-20 17:52:58.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:16.654 [2024-11-20 17:52:58.255206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:16.654 [2024-11-20 17:52:58.255217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:16.654 [... identical command/completion pairs elided: READs lba 85744-85800 and WRITEs lba 85936-86056, each aborted with SQ DELETION (00/08) ...]
00:32:16.654 [2024-11-20 17:52:58.255849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:16.654 [2024-11-20 17:52:58.255874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:16.654 [2024-11-20 17:52:58.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86064 len:8 PRP1 0x0 PRP2 0x0
00:32:16.654 [2024-11-20 17:52:58.255896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:16.654 [2024-11-20 17:52:58.255968] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c5d6e0 was disconnected and freed. reset controller.
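The "(00/08)" pair in every completion above is the NVMe Status Code Type / Status Code. A minimal decoding sketch follows, with name strings taken from the NVMe base spec's generic status set (matching SPDK's printer output); this is illustrative Python, not SPDK source:

# Sketch: decode the "(sct/sc)" pair printed with each completion above.
# SCT 0x0 is the generic command status set; SC 0x08 in that set is
# "Command Aborted due to SQ Deletion", which is why every I/O still queued
# when the submission queue is torn down for the reset completes this way.
SCT_NAMES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA ERROR", 0x7: "VENDOR SPECIFIC"}
GENERIC_SC = {0x00: "SUCCESS", 0x06: "INTERNAL DEVICE ERROR",
              0x07: "ABORTED - BY REQUEST", 0x08: "ABORTED - SQ DELETION"}

def decode_status(pair):
    """Turn an 'sct/sc' string such as '00/08' into a readable status."""
    sct, sc = (int(field, 16) for field in pair.split("/"))
    sct_name = SCT_NAMES.get(sct, hex(sct))
    sc_name = GENERIC_SC.get(sc, hex(sc)) if sct == 0x0 else hex(sc)
    return "%s: %s" % (sct_name, sc_name)

print(decode_status("00/08"))  # -> GENERIC: ABORTED - SQ DELETION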
00:32:16.654 [2024-11-20 17:52:58.256087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.654 [2024-11-20 17:52:58.256107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.256121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.654 [2024-11-20 17:52:58.256132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.256144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.654 [2024-11-20 17:52:58.256155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.256167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.654 [2024-11-20 17:52:58.256178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.654 [2024-11-20 17:52:58.256188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:16.654 [2024-11-20 17:52:58.256411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.654 [2024-11-20 17:52:58.256441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:16.654 [2024-11-20 17:52:58.256605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.654 [2024-11-20 17:52:58.256643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:16.654 [2024-11-20 17:52:58.256658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:16.655 [2024-11-20 17:52:58.256680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:16.655 [2024-11-20 17:52:58.256700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:16.655 [2024-11-20 17:52:58.256713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:16.655 [2024-11-20 17:52:58.256725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:16.655 [2024-11-20 17:52:58.256761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
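The "errno = 111" from posix_sock_create above is Linux ECONNREFUSED: the reconnect attempts are refused because the target's listener on 10.0.0.3:4420 is gone, and they keep failing until the nvmf_subsystem_add_listener call at host/timeout.sh@91 below restores it. A quick check, assuming Linux errno values:

import errno
import os

# 111 is the Linux value of ECONNREFUSED, matching the log line above.
assert errno.ECONNREFUSED == 111
print(os.strerror(errno.ECONNREFUSED))  # -> Connection refused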
00:32:16.655 [2024-11-20 17:52:58.256776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:16.655 17:52:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:32:17.590 5315.50 IOPS, 20.76 MiB/s [2024-11-20T17:52:59.378Z] [2024-11-20 17:52:59.257047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.590 [2024-11-20 17:52:59.257186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:17.590 [2024-11-20 17:52:59.257210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:17.590 [2024-11-20 17:52:59.257272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:17.590 [2024-11-20 17:52:59.257302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:17.590 [2024-11-20 17:52:59.257315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:17.590 [2024-11-20 17:52:59.257331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.590 [2024-11-20 17:52:59.257372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.590 [2024-11-20 17:52:59.257389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.590 17:52:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:17.849 [2024-11-20 17:52:59.596199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:17.849 17:52:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 118441 00:32:18.676 3543.67 IOPS, 13.84 MiB/s [2024-11-20T17:53:00.464Z] [2024-11-20 17:53:00.269392] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
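The add_listener step above is what lets the host's retry loop finally reconnect ("Resetting controller successful."). A minimal sketch of the same RPC driven from Python rather than the shell; the rpc.py path, NQN, address, and port are copied from the log, while the wrapper function is hypothetical:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path taken from the log
NQN = "nqn.2016-06.io.spdk:cnode1"

def add_listener(addr="10.0.0.3", port="4420"):
    # Equivalent to: rpc.py nvmf_subsystem_add_listener <nqn> -t tcp -a <addr> -s <port>
    subprocess.run([RPC, "nvmf_subsystem_add_listener", NQN,
                    "-t", "tcp", "-a", addr, "-s", port], check=True)

add_listener()  # target then logs: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***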
00:32:20.549 2657.75 IOPS, 10.38 MiB/s [2024-11-20T17:53:03.271Z] 3400.80 IOPS, 13.28 MiB/s [2024-11-20T17:53:04.207Z] 4245.50 IOPS, 16.58 MiB/s [2024-11-20T17:53:05.204Z] 4788.00 IOPS, 18.70 MiB/s [2024-11-20T17:53:06.138Z] 5239.00 IOPS, 20.46 MiB/s [2024-11-20T17:53:07.074Z] 5590.89 IOPS, 21.84 MiB/s [2024-11-20T17:53:07.074Z] 5880.80 IOPS, 22.97 MiB/s 00:32:25.286 Latency(us) 00:32:25.286 [2024-11-20T17:53:07.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.286 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:25.286 Verification LBA range: start 0x0 length 0x4000 00:32:25.286 NVMe0n1 : 10.01 5881.97 22.98 0.00 0.00 21719.96 2115.03 3035150.89 00:32:25.286 [2024-11-20T17:53:07.074Z] =================================================================================================================== 00:32:25.286 [2024-11-20T17:53:07.074Z] Total : 5881.97 22.98 0.00 0.00 21719.96 2115.03 3035150.89 00:32:25.286 { 00:32:25.286 "results": [ 00:32:25.286 { 00:32:25.286 "job": "NVMe0n1", 00:32:25.286 "core_mask": "0x4", 00:32:25.286 "workload": "verify", 00:32:25.286 "status": "finished", 00:32:25.286 "verify_range": { 00:32:25.286 "start": 0, 00:32:25.286 "length": 16384 00:32:25.286 }, 00:32:25.286 "queue_depth": 128, 00:32:25.286 "io_size": 4096, 00:32:25.286 "runtime": 10.007876, 00:32:25.286 "iops": 5881.967362505291, 00:32:25.286 "mibps": 22.976435009786293, 00:32:25.286 "io_failed": 0, 00:32:25.286 "io_timeout": 0, 00:32:25.286 "avg_latency_us": 21719.95711925081, 00:32:25.286 "min_latency_us": 2115.0254545454545, 00:32:25.286 "max_latency_us": 3035150.8945454545 00:32:25.286 } 00:32:25.286 ], 00:32:25.286 "core_count": 1 00:32:25.286 } 00:32:25.286 17:53:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=118557 00:32:25.286 17:53:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:25.286 17:53:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:32:25.545 Running I/O for 10 seconds... 
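bdevperf reports its summary twice above, as the Latency table and as a JSON blob. A hypothetical helper for pulling the headline numbers back out of a captured log, assuming the JSON was captured one record per line with the harness's runtime prefix (e.g. "00:32:25.286 ") still attached; key names are verbatim from the log:

import json
import re

def parse_bdevperf(raw):
    # Strip the leading runtime prefix from each line, then parse the JSON.
    body = re.sub(r"^\S+\s+", "", raw, flags=re.M)
    result = json.loads(body)["results"][0]
    return {k: result[k] for k in ("iops", "mibps", "avg_latency_us", "io_failed")}

# For the run above this returns roughly:
# {'iops': 5881.97, 'mibps': 22.98, 'avg_latency_us': 21719.96, 'io_failed': 0}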
00:32:26.482 17:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:26.741 8038.00 IOPS, 31.40 MiB/s [2024-11-20T17:53:08.529Z] [2024-11-20 17:53:08.373078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8db0 is same with the state(6) to be set
00:32:26.741 [... the identical recv-state *ERROR* line for tqpair=0x9e8db0 elided, repeated with timestamps 17:53:08.373176 through 17:53:08.374310 ...]
00:32:26.742 [2024-11-20 17:53:08.376078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:26.742 [2024-11-20 17:53:08.376119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:26.742 [... identical command/completion pairs elided: READs lba 76560-76856 and WRITEs lba 76864-77360, each aborted with SQ DELETION (00/08) ...]
00:32:26.743 [2024-11-20 17:53:08.378400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 17:53:08.378409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:26.743 [2024-11-20 17:53:08.378436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:32:26.743 [2024-11-20 17:53:08.378679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77456 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.743 [2024-11-20 17:53:08.378895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.743 [2024-11-20 17:53:08.378903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.743 [2024-11-20 17:53:08.378911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77464 len:8 PRP1 0x0 PRP2 0x0 00:32:26.743 [2024-11-20 17:53:08.378920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.378930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.378943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.378951] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77472 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.378961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.378970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.378977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.378985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77480 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.378994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.379003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.379010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.379017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77488 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.379026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.379035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.379042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.379049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.379059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.379087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.379100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.379112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.379133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.379144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.379152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.379160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.379179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.379186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.379194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.379204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.405881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.405945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.405978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.406014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.406097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.406121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.406143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.406193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.406212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.406232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.406269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.406287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.406307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.406345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.406362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:32:26.744 [2024-11-20 17:53:08.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:26.744 [2024-11-20 17:53:08.406418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:26.744 [2024-11-20 17:53:08.406435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 
0x0 00:32:26.744 [2024-11-20 17:53:08.406458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406565] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c69a50 was disconnected and freed. reset controller. 00:32:26.744 17:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:32:26.744 [2024-11-20 17:53:08.406786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.744 [2024-11-20 17:53:08.406834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.744 [2024-11-20 17:53:08.406922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.744 [2024-11-20 17:53:08.406965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.406987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.744 [2024-11-20 17:53:08.407007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.744 [2024-11-20 17:53:08.407027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:26.744 [2024-11-20 17:53:08.407611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:26.744 [2024-11-20 17:53:08.407678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:26.744 [2024-11-20 17:53:08.407885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.744 [2024-11-20 17:53:08.407943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:26.744 [2024-11-20 17:53:08.407969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:26.744 [2024-11-20 17:53:08.408006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:26.744 [2024-11-20 17:53:08.408040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:26.744 [2024-11-20 17:53:08.408061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:26.744 [2024-11-20 17:53:08.408111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:26.744 [2024-11-20 17:53:08.408155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
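What the burst above records: the harness has evidently just dropped the target's TCP listener while 4 KiB writes (len:8 sectors) were in flight -- the re-add at host/timeout.sh@102 below restores it. The host-side qpair is torn down, every outstanding and queued command is printed and manually completed with ABORTED - SQ DELETION (status 00/08), the qpair object (0x1c69a50) is freed, and bdev_nvme begins reconnect attempts against the now-closed port; each connect() fails with errno 111 (ECONNREFUSED) and each reset attempt ends in "Resetting controller failed." When triaging a saved copy of this console output, the size of the abort storm is one grep away (the nvmf_timeout.log filename here is hypothetical):

  $ grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log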
00:32:26.744 [2024-11-20 17:53:08.408190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.679 4784.50 IOPS, 18.69 MiB/s [2024-11-20T17:53:09.467Z] [2024-11-20 17:53:09.408422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.679 [2024-11-20 17:53:09.408529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:27.679 [2024-11-20 17:53:09.408554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:27.679 [2024-11-20 17:53:09.408588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:27.679 [2024-11-20 17:53:09.408609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:27.679 [2024-11-20 17:53:09.408620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:27.679 [2024-11-20 17:53:09.408635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.679 [2024-11-20 17:53:09.408671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.679 [2024-11-20 17:53:09.408685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:28.872 3189.67 IOPS, 12.46 MiB/s [2024-11-20T17:53:10.660Z] [2024-11-20 17:53:10.408890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.872 [2024-11-20 17:53:10.408985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:28.872 [2024-11-20 17:53:10.409003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:28.872 [2024-11-20 17:53:10.409034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:28.872 [2024-11-20 17:53:10.409054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:28.872 [2024-11-20 17:53:10.409079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:28.872 [2024-11-20 17:53:10.409093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:28.872 [2024-11-20 17:53:10.409129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
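The falling throughput samples (4784.50 -> 3189.67 -> 2392.25 -> 1913.80 IOPS) behave like running averages over elapsed wall time rather than fresh completions: each figure multiplied by its elapsed seconds gives the same ~9569 total I/Os, i.e. nothing completes while the controller is unreachable. That reading is an inference from the numbers, not something the log states, but it is easy to check:

  $ python3 -c 'print(round(4784.50*2), round(3189.67*3), round(2392.25*4), round(1913.80*5))'
  9569 9569 9569 9569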
00:32:28.872 [2024-11-20 17:53:10.409142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:29.697 2392.25 IOPS, 9.34 MiB/s [2024-11-20T17:53:11.485Z] 17:53:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:29.697 [2024-11-20 17:53:11.412317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.697 [2024-11-20 17:53:11.412433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be51c0 with addr=10.0.0.3, port=4420 00:32:29.697 [2024-11-20 17:53:11.412454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be51c0 is same with the state(6) to be set 00:32:29.697 [2024-11-20 17:53:11.412696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be51c0 (9): Bad file descriptor 00:32:29.697 [2024-11-20 17:53:11.412966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:29.697 [2024-11-20 17:53:11.412987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:29.697 [2024-11-20 17:53:11.413000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:29.697 [2024-11-20 17:53:11.416674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:29.697 [2024-11-20 17:53:11.416701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:29.955 [2024-11-20 17:53:11.717134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.955 17:53:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 118557 00:32:30.781 1913.80 IOPS, 7.48 MiB/s [2024-11-20T17:53:12.569Z] [2024-11-20 17:53:12.455446] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
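The reconnects only start succeeding because host/timeout.sh@102 put the listener back: "Resetting controller successful" lands about 0.7 s after the "Target Listening" notice, on the next scheduled retry. Reduced to its two RPCs, the fault-injection mechanic this test drives looks like the sketch below (subsystem NQN, address, and port exactly as in this run):

  $ scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $ sleep 3    # the host/timeout.sh@101 step seen above
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420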
00:32:32.723 2614.17 IOPS, 10.21 MiB/s [2024-11-20T17:53:15.445Z] 3373.29 IOPS, 13.18 MiB/s [2024-11-20T17:53:16.379Z] 3983.62 IOPS, 15.56 MiB/s [2024-11-20T17:53:17.315Z] 4469.00 IOPS, 17.46 MiB/s [2024-11-20T17:53:17.315Z] 4841.10 IOPS, 18.91 MiB/s 00:32:35.527 Latency(us) 00:32:35.527 [2024-11-20T17:53:17.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.527 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:35.527 Verification LBA range: start 0x0 length 0x4000 00:32:35.527 NVMe0n1 : 10.00 4841.82 18.91 3679.79 0.00 14994.32 700.04 3050402.91 00:32:35.527 [2024-11-20T17:53:17.315Z] =================================================================================================================== 00:32:35.527 [2024-11-20T17:53:17.315Z] Total : 4841.82 18.91 3679.79 0.00 14994.32 0.00 3050402.91 00:32:35.527 { 00:32:35.527 "results": [ 00:32:35.527 { 00:32:35.527 "job": "NVMe0n1", 00:32:35.527 "core_mask": "0x4", 00:32:35.527 "workload": "verify", 00:32:35.527 "status": "finished", 00:32:35.527 "verify_range": { 00:32:35.527 "start": 0, 00:32:35.527 "length": 16384 00:32:35.527 }, 00:32:35.527 "queue_depth": 128, 00:32:35.527 "io_size": 4096, 00:32:35.527 "runtime": 10.00409, 00:32:35.527 "iops": 4841.819695744441, 00:32:35.527 "mibps": 18.91335818650172, 00:32:35.527 "io_failed": 36813, 00:32:35.527 "io_timeout": 0, 00:32:35.527 "avg_latency_us": 14994.320813085638, 00:32:35.527 "min_latency_us": 700.0436363636363, 00:32:35.527 "max_latency_us": 3050402.909090909 00:32:35.527 } 00:32:35.527 ], 00:32:35.527 "core_count": 1 00:32:35.527 } 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 118393 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 118393 ']' 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 118393 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118393 00:32:35.527 killing process with pid 118393 00:32:35.527 Received shutdown signal, test time was about 10.000000 seconds 00:32:35.527 00:32:35.527 Latency(us) 00:32:35.527 [2024-11-20T17:53:17.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.527 [2024-11-20T17:53:17.315Z] =================================================================================================================== 00:32:35.527 [2024-11-20T17:53:17.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118393' 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 118393 00:32:35.527 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 118393 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=118690 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 118690 /var/tmp/bdevperf.sock 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 118690 ']' 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.786 17:53:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:36.045 [2024-11-20 17:53:17.612484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:36.045 [2024-11-20 17:53:17.612574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118690 ] 00:32:36.045 [2024-11-20 17:53:17.748815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.303 [2024-11-20 17:53:17.883047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.244 17:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.244 17:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:32:37.244 17:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=118718 00:32:37.244 17:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 118690 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:32:37.244 17:53:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:32:37.503 17:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:32:37.760 NVMe0n1 00:32:37.760 17:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=118776 00:32:37.760 17:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:37.760 17:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:32:38.019 Running I/O for 10 seconds... 
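Setup for the second run, as traced above: bdevperf is restarted in random-read mode (queue depth 128 via -q, 4096-byte I/Os via -o, -w randread, -t 10 s), the controller is attached with a 5 s controller-loss timeout and a 2 s reconnect delay, and a bpftrace probe (scripts/bpf/nvmf_timeout.bt) is hooked onto the new pid before I/O starts. The MiB/s figure printed next to each IOPS sample is simply IOPS scaled by the 4 KiB I/O size; for example the 18.91 MiB/s in the summary table above:

  $ python3 -c 'print(round(4841.82*4096/2**20, 2))'
  18.91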
00:32:38.953 17:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:39.213 16731.00 IOPS, 65.36 MiB/s [2024-11-20T17:53:21.001Z]
[2024-11-20 17:53:20.840619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ec7f0 is same with the state(6) to be set
[... ~60 further copies of the same recv-state message, all within the same millisecond as the target tears the connection down, elided ...]
00:32:39.214 [2024-11-20 17:53:20.841750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.214 [2024-11-20 17:53:20.841819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for each remaining in-flight read (~50 pairs at random LBAs: 19376, 2216, 90104, ... 22856); the excerpt ends mid-run ...]
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:39.216 [2024-11-20 17:53:20.843335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 
17:53:20.843610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.216 [2024-11-20 17:53:20.843971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.216 [2024-11-20 17:53:20.843986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.843997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2016 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:39.217 [2024-11-20 17:53:20.844639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.844984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.844996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.845011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.845023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.845047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.845060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.845132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.217 [2024-11-20 17:53:20.845145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.217 [2024-11-20 17:53:20.845188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:39.218 [2024-11-20 17:53:20.845201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:39.218 [2024-11-20 17:53:20.845212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123304 len:8 PRP1 0x0 PRP2 0x0 00:32:39.218 [2024-11-20 17:53:20.845225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.218 [2024-11-20 17:53:20.845304] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13406e0 was disconnected and freed. reset controller. 
00:32:39.218 [2024-11-20 17:53:20.845412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.218 [2024-11-20 17:53:20.845428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.218 [2024-11-20 17:53:20.845439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.218 [2024-11-20 17:53:20.845448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.218 [2024-11-20 17:53:20.845458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.218 [2024-11-20 17:53:20.845467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.218 [2024-11-20 17:53:20.845477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.218 [2024-11-20 17:53:20.845486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.218 [2024-11-20 17:53:20.845495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c81c0 is same with the state(6) to be set 00:32:39.218 [2024-11-20 17:53:20.845787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:39.218 [2024-11-20 17:53:20.845828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c81c0 (9): Bad file descriptor 00:32:39.218 [2024-11-20 17:53:20.846020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.218 [2024-11-20 17:53:20.846043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c81c0 with addr=10.0.0.3, port=4420 00:32:39.218 [2024-11-20 17:53:20.846054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c81c0 is same with the state(6) to be set 00:32:39.218 [2024-11-20 17:53:20.846073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c81c0 (9): Bad file descriptor 00:32:39.218 [2024-11-20 17:53:20.846091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:39.218 [2024-11-20 17:53:20.846100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:39.218 [2024-11-20 17:53:20.846112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:39.218 [2024-11-20 17:53:20.846133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.218 [2024-11-20 17:53:20.846159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:39.218 17:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 118776 00:32:41.084 9808.00 IOPS, 38.31 MiB/s [2024-11-20T17:53:22.872Z] 6538.67 IOPS, 25.54 MiB/s [2024-11-20T17:53:22.872Z] [2024-11-20 17:53:22.859677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.084 [2024-11-20 17:53:22.860379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c81c0 with addr=10.0.0.3, port=4420 00:32:41.084 [2024-11-20 17:53:22.860410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c81c0 is same with the state(6) to be set 00:32:41.084 [2024-11-20 17:53:22.860457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c81c0 (9): Bad file descriptor 00:32:41.084 [2024-11-20 17:53:22.860482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.084 [2024-11-20 17:53:22.860497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.084 [2024-11-20 17:53:22.860513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.084 [2024-11-20 17:53:22.860554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.084 [2024-11-20 17:53:22.860570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.952 4904.00 IOPS, 19.16 MiB/s [2024-11-20T17:53:24.999Z] 3923.20 IOPS, 15.32 MiB/s [2024-11-20T17:53:24.999Z] [2024-11-20 17:53:24.860829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.211 [2024-11-20 17:53:24.860921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c81c0 with addr=10.0.0.3, port=4420 00:32:43.211 [2024-11-20 17:53:24.860940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c81c0 is same with the state(6) to be set 00:32:43.211 [2024-11-20 17:53:24.860972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c81c0 (9): Bad file descriptor 00:32:43.211 [2024-11-20 17:53:24.860993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.211 [2024-11-20 17:53:24.861003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.211 [2024-11-20 17:53:24.861015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.211 [2024-11-20 17:53:24.861050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.211 [2024-11-20 17:53:24.861062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.151 3269.33 IOPS, 12.77 MiB/s [2024-11-20T17:53:26.939Z] 2802.29 IOPS, 10.95 MiB/s [2024-11-20T17:53:26.939Z] [2024-11-20 17:53:26.861159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:45.151 [2024-11-20 17:53:26.861247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.151 [2024-11-20 17:53:26.861260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.151 [2024-11-20 17:53:26.861275] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:45.151 [2024-11-20 17:53:26.861310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.100 2452.00 IOPS, 9.58 MiB/s 00:32:46.100 Latency(us) 00:32:46.100 [2024-11-20T17:53:27.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.101 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:32:46.101 NVMe0n1 : 8.18 2397.96 9.37 15.65 0.00 52967.80 3664.06 7015926.69 00:32:46.101 [2024-11-20T17:53:27.889Z] =================================================================================================================== 00:32:46.101 [2024-11-20T17:53:27.889Z] Total : 2397.96 9.37 15.65 0.00 52967.80 3664.06 7015926.69 00:32:46.101 { 00:32:46.101 "results": [ 00:32:46.101 { 00:32:46.101 "job": "NVMe0n1", 00:32:46.101 "core_mask": "0x4", 00:32:46.101 "workload": "randread", 00:32:46.101 "status": "finished", 00:32:46.101 "queue_depth": 128, 00:32:46.101 "io_size": 4096, 00:32:46.101 "runtime": 8.180281, 00:32:46.101 "iops": 2397.9616348142563, 00:32:46.101 "mibps": 9.367037635993189, 00:32:46.101 "io_failed": 128, 00:32:46.101 "io_timeout": 0, 00:32:46.101 "avg_latency_us": 52967.79749521143, 00:32:46.101 "min_latency_us": 3664.0581818181818, 00:32:46.101 "max_latency_us": 7015926.69090909 00:32:46.101 } 00:32:46.101 ], 00:32:46.101 "core_count": 1 00:32:46.101 } 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:46.101 Attaching 5 probes... 
00:32:46.101 1597.138772: reset bdev controller NVMe0 00:32:46.101 1597.273375: reconnect bdev controller NVMe0 00:32:46.101 3610.773670: reconnect delay bdev controller NVMe0 00:32:46.101 3610.819318: reconnect bdev controller NVMe0 00:32:46.101 5612.017833: reconnect delay bdev controller NVMe0 00:32:46.101 5612.047425: reconnect bdev controller NVMe0 00:32:46.101 7612.519282: reconnect delay bdev controller NVMe0 00:32:46.101 7612.545359: reconnect bdev controller NVMe0 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 118718 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 118690 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 118690 ']' 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 118690 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:46.101 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118690 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:46.360 killing process with pid 118690 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118690' 00:32:46.360 Received shutdown signal, test time was about 8.247819 seconds 00:32:46.360 00:32:46.360 Latency(us) 00:32:46.360 [2024-11-20T17:53:28.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.360 [2024-11-20T17:53:28.148Z] =================================================================================================================== 00:32:46.360 [2024-11-20T17:53:28.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 118690 00:32:46.360 17:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 118690 00:32:46.619 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:32:46.877 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.877 17:53:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.877 rmmod nvme_tcp 00:32:46.877 rmmod nvme_fabrics 00:32:46.878 rmmod nvme_keyring 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 118107 ']' 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 118107 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 118107 ']' 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 118107 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118107 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118107' 00:32:47.136 killing process with pid 118107 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 118107 00:32:47.136 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 118107 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:47.394 17:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:47.394 17:53:29 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.394 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:32:47.658 00:32:47.658 real 0m48.968s 00:32:47.658 user 2m25.013s 00:32:47.658 sys 0m5.451s 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.658 ************************************ 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:47.658 END TEST nvmf_timeout 00:32:47.658 ************************************ 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:47.658 00:32:47.658 real 6m50.591s 00:32:47.658 user 18m42.161s 00:32:47.658 sys 1m17.107s 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.658 ************************************ 00:32:47.658 END TEST nvmf_host 00:32:47.658 ************************************ 00:32:47.658 17:53:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.658 17:53:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:47.658 17:53:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:47.658 17:53:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:47.658 17:53:29 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:47.658 17:53:29 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.659 17:53:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.659 ************************************ 00:32:47.659 START TEST nvmf_target_core_interrupt_mode 00:32:47.659 ************************************ 00:32:47.659 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:47.659 * Looking for test storage... 
00:32:47.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:32:47.659 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:47.659 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:32:47.659 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:47.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.920 --rc genhtml_branch_coverage=1 00:32:47.920 --rc genhtml_function_coverage=1 00:32:47.920 --rc genhtml_legend=1 00:32:47.920 --rc geninfo_all_blocks=1 00:32:47.920 --rc geninfo_unexecuted_blocks=1 00:32:47.920 00:32:47.920 ' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:47.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.920 --rc genhtml_branch_coverage=1 00:32:47.920 --rc genhtml_function_coverage=1 00:32:47.920 --rc genhtml_legend=1 00:32:47.920 --rc geninfo_all_blocks=1 00:32:47.920 --rc geninfo_unexecuted_blocks=1 00:32:47.920 00:32:47.920 ' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:47.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.920 --rc genhtml_branch_coverage=1 00:32:47.920 --rc genhtml_function_coverage=1 00:32:47.920 --rc genhtml_legend=1 00:32:47.920 --rc geninfo_all_blocks=1 00:32:47.920 --rc geninfo_unexecuted_blocks=1 00:32:47.920 00:32:47.920 ' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:47.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.920 --rc genhtml_branch_coverage=1 00:32:47.920 --rc genhtml_function_coverage=1 00:32:47.920 --rc genhtml_legend=1 00:32:47.920 --rc geninfo_all_blocks=1 00:32:47.920 --rc geninfo_unexecuted_blocks=1 00:32:47.920 00:32:47.920 ' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.920 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:47.921 ************************************ 00:32:47.921 START TEST nvmf_abort 00:32:47.921 ************************************ 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:47.921 * Looking for test storage... 00:32:47.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:32:47.921 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.180 --rc genhtml_branch_coverage=1 00:32:48.180 --rc genhtml_function_coverage=1 00:32:48.180 --rc genhtml_legend=1 00:32:48.180 --rc geninfo_all_blocks=1 00:32:48.180 --rc geninfo_unexecuted_blocks=1 00:32:48.180 00:32:48.180 ' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.180 --rc genhtml_branch_coverage=1 00:32:48.180 --rc genhtml_function_coverage=1 00:32:48.180 --rc genhtml_legend=1 00:32:48.180 --rc geninfo_all_blocks=1 00:32:48.180 --rc geninfo_unexecuted_blocks=1 00:32:48.180 00:32:48.180 ' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.180 --rc genhtml_branch_coverage=1 00:32:48.180 --rc genhtml_function_coverage=1 00:32:48.180 --rc genhtml_legend=1 00:32:48.180 --rc geninfo_all_blocks=1 00:32:48.180 --rc geninfo_unexecuted_blocks=1 00:32:48.180 00:32:48.180 ' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:48.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.180 --rc genhtml_branch_coverage=1 00:32:48.180 --rc genhtml_function_coverage=1 00:32:48.180 --rc genhtml_legend=1 00:32:48.180 --rc geninfo_all_blocks=1 00:32:48.180 --rc geninfo_unexecuted_blocks=1 00:32:48.180 00:32:48.180 ' 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.180 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.181 17:53:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@456 -- # nvmf_veth_init 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:48.181 Cannot find device "nvmf_init_br" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:48.181 Cannot find device "nvmf_init_br2" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:48.181 Cannot find device "nvmf_tgt_br" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:48.181 Cannot find device "nvmf_tgt_br2" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:48.181 Cannot find device "nvmf_init_br" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:48.181 Cannot find device "nvmf_init_br2" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:48.181 Cannot find device "nvmf_tgt_br" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:48.181 Cannot find device "nvmf_tgt_br2" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:48.181 Cannot find device "nvmf_br" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:48.181 Cannot find device "nvmf_init_if" 00:32:48.181 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:48.182 Cannot find device "nvmf_init_if2" 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:48.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:48.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:48.182 17:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:48.440 
17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:48.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:32:48.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:32:48.440 00:32:48.440 --- 10.0.0.3 ping statistics --- 00:32:48.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.440 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:48.440 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:48.440 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:48.440 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.159 ms 00:32:48.440 00:32:48.440 --- 10.0.0.4 ping statistics --- 00:32:48.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.440 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:48.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:48.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:32:48.698 00:32:48.698 --- 10.0.0.1 ping statistics --- 00:32:48.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.698 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:48.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:32:48.698 00:32:48.698 --- 10.0.0.2 ping statistics --- 00:32:48.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.698 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@457 -- # return 0 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:48.698 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=119191 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 119191 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 119191 ']' 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.699 17:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.699 [2024-11-20 17:53:30.350320] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:48.699 [2024-11-20 17:53:30.351932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:48.699 [2024-11-20 17:53:30.352037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.957 [2024-11-20 17:53:30.503299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:48.957 [2024-11-20 17:53:30.608973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.957 [2024-11-20 17:53:30.609047] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.957 [2024-11-20 17:53:30.609082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.957 [2024-11-20 17:53:30.609104] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.957 [2024-11-20 17:53:30.609117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.957 [2024-11-20 17:53:30.609481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.957 [2024-11-20 17:53:30.609760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.957 [2024-11-20 17:53:30.609767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.957 [2024-11-20 17:53:30.705247] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:48.958 [2024-11-20 17:53:30.705416] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:48.958 [2024-11-20 17:53:30.706555] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:48.958 [2024-11-20 17:53:30.712389] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
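For orientation: once nvmf_tgt is up, the rpc_cmd calls traced below configure the target. rpc_cmd here appears to resolve to SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket named in the waitforlisten arguments above; the sketch below condenses that sequence, with the helper function rpc being illustrative only and every argument value copied verbatim from the trace that follows:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256    # TCP transport, flags as passed by abort.sh
rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB RAM-backed bdev, 4096-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                      # wrap Malloc0 in a delay bdev so I/O stays in flight long enough to abort
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # allow any host, serial SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0           # expose Delay0 as namespace 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420  # discovery service on the same address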
00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 [2024-11-20 17:53:31.515224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 Malloc0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 Delay0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 17:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 [2024-11-20 17:53:31.587496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.894 17:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:50.158 [2024-11-20 17:53:31.765057] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:52.068 Initializing NVMe Controllers 00:32:52.068 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:32:52.068 controller IO queue size 128 less than required 00:32:52.068 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:52.068 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:52.068 Initialization complete. Launching workers. 
00:32:52.068 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26658 00:32:52.068 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26715, failed to submit 66 00:32:52.068 success 26658, unsuccessful 57, failed 0 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:52.068 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:52.327 rmmod nvme_tcp 00:32:52.327 rmmod nvme_fabrics 00:32:52.327 rmmod nvme_keyring 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 119191 ']' 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 119191 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 119191 ']' 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 119191 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119191 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:52.327 killing process with pid 119191 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119191' 00:32:52.327 
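Two notes on the output above and the teardown below. First, the abort report's counters reconcile: 26715 aborts submitted = 26658 success + 57 unsuccessful, with a further 66 that could not be submitted at all ("unsuccessful" presumably means the target declined the abort, e.g. because the delayed I/O had already completed). Second, the firewall cleanup in the trace that follows works because every rule was installed by the ipts wrapper with an '-m comment --comment SPDK_NVMF:...' tag (visible earlier in the trace), so the iptr helper can restore everything else from a full dump while dropping only the test's rules. A minimal sketch of that idea:

# keep every rule except the SPDK_NVMF-tagged ones the test added
iptables-save | grep -v SPDK_NVMF | iptables-restore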
17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 119191 00:32:52.327 17:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 119191 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:52.585 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:52.586 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:52.586 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.843 17:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:32:52.843 00:32:52.843 real 0m5.010s 00:32:52.843 user 0m9.651s 00:32:52.843 sys 0m1.443s 00:32:52.843 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.843 ************************************ 00:32:52.843 END TEST nvmf_abort 00:32:52.843 ************************************ 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:52.844 ************************************ 00:32:52.844 START TEST nvmf_ns_hotplug_stress 00:32:52.844 ************************************ 00:32:52.844 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:53.102 * Looking for test storage... 00:32:53.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:53.102 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:53.102 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:32:53.102 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.103 17:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.103 --rc genhtml_branch_coverage=1 00:32:53.103 --rc genhtml_function_coverage=1 00:32:53.103 --rc genhtml_legend=1 00:32:53.103 --rc geninfo_all_blocks=1 00:32:53.103 --rc geninfo_unexecuted_blocks=1 00:32:53.103 00:32:53.103 ' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.103 --rc genhtml_branch_coverage=1 00:32:53.103 --rc genhtml_function_coverage=1 00:32:53.103 --rc genhtml_legend=1 00:32:53.103 --rc geninfo_all_blocks=1 00:32:53.103 --rc geninfo_unexecuted_blocks=1 00:32:53.103 00:32:53.103 
' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.103 --rc genhtml_branch_coverage=1 00:32:53.103 --rc genhtml_function_coverage=1 00:32:53.103 --rc genhtml_legend=1 00:32:53.103 --rc geninfo_all_blocks=1 00:32:53.103 --rc geninfo_unexecuted_blocks=1 00:32:53.103 00:32:53.103 ' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.103 --rc genhtml_branch_coverage=1 00:32:53.103 --rc genhtml_function_coverage=1 00:32:53.103 --rc genhtml_legend=1 00:32:53.103 --rc geninfo_all_blocks=1 00:32:53.103 --rc geninfo_unexecuted_blocks=1 00:32:53.103 00:32:53.103 ' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.103 17:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.103 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.104 17:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:53.104 Cannot find device "nvmf_init_br" 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:32:53.104 Cannot find device "nvmf_init_br2" 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:53.104 Cannot find device "nvmf_tgt_br" 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.104 Cannot find device "nvmf_tgt_br2" 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:32:53.104 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:53.362 Cannot find device "nvmf_init_br" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:53.362 Cannot find device "nvmf_init_br2" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:53.362 Cannot find device "nvmf_tgt_br" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:53.362 Cannot find device "nvmf_tgt_br2" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:53.362 Cannot find device "nvmf_br" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:53.362 Cannot find device "nvmf_init_if" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:53.362 Cannot find device "nvmf_init_if2" 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.362 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:53.362 17:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:53.362 17:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:53.362 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:53.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:53.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:32:53.620 00:32:53.620 --- 10.0.0.3 ping statistics --- 00:32:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.620 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:53.620 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:53.620 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:32:53.620 00:32:53.620 --- 10.0.0.4 ping statistics --- 00:32:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.620 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:53.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:53.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:32:53.620 00:32:53.620 --- 10.0.0.1 ping statistics --- 00:32:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.620 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:53.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:32:53.620 00:32:53.620 --- 10.0.0.2 ping statistics --- 00:32:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.620 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # return 0 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=119500 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 119500 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 119500 ']' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:53.620 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:53.620 17:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:53.620 [2024-11-20 17:53:35.317736] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:53.620 [2024-11-20 17:53:35.319250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:53.620 [2024-11-20 17:53:35.319939] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.878 [2024-11-20 17:53:35.472706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:53.878 [2024-11-20 17:53:35.574654] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.878 [2024-11-20 17:53:35.574708] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.878 [2024-11-20 17:53:35.574719] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.878 [2024-11-20 17:53:35.574728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.878 [2024-11-20 17:53:35.574735] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.878 [2024-11-20 17:53:35.574853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.878 [2024-11-20 17:53:35.574959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.878 [2024-11-20 17:53:35.574963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.878 [2024-11-20 17:53:35.661851] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.878 [2024-11-20 17:53:35.662307] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:53.878 [2024-11-20 17:53:35.662996] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:54.136 [2024-11-20 17:53:35.667297] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
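For orientation: the nvmf_veth_init sequence traced above reduces to the topology below, reconstructed from the ip commands in this log. Interface names and the 10.0.0.x addresses are as logged; the earlier "Cannot find device" lines are just the preceding teardown failing harmlessly because nothing existed yet. This is a condensed sketch, not the verbatim common.sh code:

ip netns add nvmf_tgt_ns_spdk

# Two initiator veth pairs stay in the default namespace; the "if" ends of the
# two target pairs are moved into the target namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # first initiator IP
ip addr add 10.0.0.2/24 dev nvmf_init_if2                                # second initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target IP

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the four peer ends so both namespaces share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port and allow intra-bridge forwarding (the SPDK_NVMF
# comment tags used for later cleanup are dropped here for brevity).
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3    # default ns to target ns, as the four pings above verify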
00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:55.093 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:55.093 [2024-11-20 17:53:36.840268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.363 17:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:55.621 17:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:55.879 [2024-11-20 17:53:37.504820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:55.879 17:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:56.137 17:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:56.395 Malloc0 00:32:56.395 17:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:56.962 Delay0 00:32:56.962 17:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.220 17:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:57.479 NULL1 00:32:57.479 17:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:57.738 17:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=119638 00:32:57.738 17:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:57.738 17:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:32:57.738 17:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.113 Read completed with error (sct=0, sc=11) 00:32:59.113 17:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:59.371 17:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:59.371 17:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:59.940 true 00:32:59.940 17:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:32:59.940 17:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.515 17:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.773 17:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:00.773 17:53:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:01.032 true 00:33:01.290 17:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:01.290 17:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:01.857 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:01.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:01.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:01.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.115 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:02.115 17:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:02.373 true 00:33:02.373 17:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:02.373 17:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.309 17:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.309 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:03.309 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:03.877 true 00:33:03.877 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:03.877 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.877 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.135 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:04.135 17:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:04.395 true 00:33:04.395 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:04.395 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.653 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.912 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:04.912 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:05.171 true 00:33:05.430 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:05.430 17:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.997 17:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.564 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:06.564 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:06.823 true 00:33:06.823 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:06.823 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.080 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.338 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:07.338 17:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:07.596 true 00:33:07.596 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:07.596 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.853 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.111 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:08.111 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:08.369 true 00:33:08.369 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:08.369 17:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.322 17:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.591 17:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:09.591 17:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:09.861 true 00:33:09.861 17:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:09.861 17:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.119 17:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:10.377 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:10.377 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:10.635 true 00:33:10.635 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:10.635 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.893 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.151 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:11.152 17:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:11.719 true 00:33:11.719 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:11.719 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.976 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
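The namespaces being cycled here were configured at 17:53:36 through 17:53:39 (script lines @27 to @36). Condensed, the target-side RPC bring-up is the sequence below; the command lines are as traced, while the flag notes and the nsid comments (Delay0 added first, hence nsid 1) are inferred rather than taken from the script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                # any host, fixed serial, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

$rpc bdev_malloc_create 32 512 -b Malloc0         # 32 MiB RAM bdev, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency on reads and writes
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes nsid 1
$rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # the bdev the loop resizes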
00:33:12.234 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:12.234 17:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:12.492 true 00:33:12.492 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:12.492 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.750 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.316 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:13.316 17:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:13.316 true 00:33:13.573 17:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:13.573 17:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.835 17:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.107 17:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:14.107 17:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:14.365 true 00:33:14.623 17:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:14.623 17:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.189 17:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.447 17:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:15.447 17:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:15.704 true 00:33:15.961 17:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:15.961 17:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
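The error completions threaded through this part of the log come from the I/O generator started at 17:53:39 (script line @40, PID 119638): random reads against the subsystem whose namespace is being hot-removed under it. The invocation, reformatted from the trace; the flag notes are editorial, and the reading of -Q as error-log throttling is an inference from the "Message suppressed 999 times" lines it produces:

# Core 0 only (-c 0x1), 30 s (-t) of QD-128 (-q) random 512-byte reads (-w/-o)
# against the 10.0.0.3:4420 listener; -Q 1000 keeps the run alive through I/O
# errors and prints roughly one message per 1000 of them.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!    # 119638 in this run; the stress loop polls it with kill -0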
00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 17:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.591 17:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:17.591 17:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:17.858 true 00:33:17.858 17:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:17.858 17:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.432 17:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:18.948 17:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:18.948 17:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:19.208 true 00:33:19.208 17:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:19.208 17:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.143 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:33:20.143 17:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.401 17:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:20.401 17:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:20.659 true 00:33:20.659 17:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:20.659 17:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.918 17:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:21.176 17:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:21.176 17:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:21.435 true 00:33:21.435 17:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:21.435 17:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.015 17:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.273 17:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:22.273 17:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:22.532 true 00:33:22.532 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:22.532 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.790 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.049 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:23.049 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:23.307 true 00:33:23.307 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:23.307 17:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:24.242 17:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.500 17:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:24.500 17:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:24.759 true 00:33:24.759 17:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:24.759 17:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:25.327 17:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:25.585 17:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:25.585 17:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:25.844 true 00:33:25.844 17:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:25.844 17:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:26.102 17:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.360 17:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:26.360 17:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:26.618 true 00:33:26.876 17:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:26.876 17:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:27.134 17:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
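The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" bursts above are the expected initiator-side fallout of the churn. Reading sc as decimal, 11 is generic command status 0x0b, "Invalid Namespace or Format" in the NVMe status tables: reads caught in the window where NSID 1 has been removed but not yet re-added, which is exactly the race this test provokes, and the message is rate-limited rather than printed once per failed read. The perf summary just below puts numbers on that cost: the hot-plugged NSID 1 sustains about 1.6k IOPS at a 33 ms average latency, versus about 8.2k IOPS at 15.6 ms for NSID 2, whose NULL1 bdev is only resized, never detached.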
00:33:27.392 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:27.392 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:27.649 true 00:33:27.649 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:27.649 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:27.907 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:28.165 Initializing NVMe Controllers
00:33:28.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:33:28.165 Controller IO queue size 128, less than required.
00:33:28.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:28.165 Controller IO queue size 128, less than required.
00:33:28.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:28.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:28.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:28.165 Initialization complete. Launching workers.
00:33:28.165 ========================================================
00:33:28.165                                                                                        Latency(us)
00:33:28.165 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:33:28.165 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1602.42       0.78   33159.83    3336.20 1136106.97
00:33:28.165 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8194.00       4.00   15621.61    2504.04  639901.74
00:33:28.165 ========================================================
00:33:28.165 Total                                                                  :    9796.42       4.78   18490.38    2504.04 1136106.97
00:33:28.165
00:33:28.165 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:28.165 17:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:28.422 true 00:33:28.422 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119638 00:33:28.422 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (119638) - No such process 00:33:28.422 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 119638 00:33:28.422 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.680 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:29.245 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:29.245 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:29.245 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:29.245 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:29.245 17:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:29.245 null0 00:33:29.245 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:29.245 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:29.245 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:29.811 null1 00:33:29.811 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:29.811 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:29.811 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:30.069 null2 00:33:30.069 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:30.069 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:30.069 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:30.327 null3 00:33:30.327 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:30.327 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:30.327 17:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:30.584 null4 00:33:30.585 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:30.585 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:30.585 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:31.153 null5 00:33:31.153 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:31.153 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:31.153 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:31.411 null6 00:33:31.411 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:31.411 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:31.411 17:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:31.670 null7 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
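With the single-namespace loop finished, the sh@58-sh@60 markers set up phase two: eight worker slots and eight null bdevs, null0 through null7, each created with a total size of 100 (MB, as I read the RPC arguments) and a 4096-byte block size. The dispatch markers (sh@62-sh@64) already interleave with the creation lines above; the worker helper and the dispatcher are sketched after the next two blocks. A sketch of the setup, loop shape assumed from the markers:

    # Sketch of the phase-two setup behind sh@58-sh@60; rpc_py as in the earlier sketch.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096    # name, total size, block size
    done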
00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
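The sh@14-sh@18 markers that now dominate the trace are the body of the add_remove helper each worker runs: it binds one namespace ID to one null bdev and flips it on and off ten times. A reconstructed sketch, with argument handling assumed:

    # Sketch of the add_remove helper behind the sh@14-sh@18 markers.
    add_remove() {
        local nsid=$1 bdev=$2                                                          # sh@14
        for ((i = 0; i < 10; i++)); do                                                 # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }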
00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
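Around those helper markers, sh@62-sh@64 are the dispatcher: one backgrounded add_remove per bdev, with each job PID recorded so the sh@66 wait visible just below can join PIDs 120598 through 120612. A sketch under the same assumptions:

    # Sketch of the dispatch behind sh@62-sh@64 and the sh@66 wait below.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # NSIDs 1-8 mapped onto null0-null7
        pids+=($!)                          # sh@64: remember the worker's PID
    done
    wait "${pids[@]}"                       # sh@66: join all eight workers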
00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 120598 120599 120602 120604 120606 120607 120609 120612 00:33:31.670 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:31.929 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:31.929 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:31.929 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:31.929 17:54:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.929 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:31.929 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:32.186 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:32.186 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:32.444 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:32.444 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.444 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:32.702 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:32.960 17:54:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:32.960 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.219 17:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.478 17:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:33.478 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:33.736 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:33.736 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.736 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.736 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.737 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:33.995 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:34.253 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.253 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.253 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.253 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:34.253 17:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:34.253 17:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:34.253 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:34.253 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.253 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.253 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:34.253 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:34.511 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.512 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.770 17:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:34.770 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:35.029 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:35.332 17:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:35.332 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
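The remainder of the trace is just the eight workers' add/remove RPCs interleaving. The ordering varies pass to pass with scheduling, so each (( ++i )) marker belongs to whichever worker happened to run next, and the churn presumably continues until every worker completes its ten cycles and the sh@66 wait reaps all eight PIDs.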
00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:35.589 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:35.847 17:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:35.847 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:36.105 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:36.106 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:36.364 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.364 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.364 17:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:36.364 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:36.364 17:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:36.364 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:36.364 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.364 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.364 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:36.364 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:36.622 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
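The null0–null7 bdevs these add calls attach were created earlier in the test, before this excerpt begins. A hedged sketch of that setup step — bdev_null_create is the standard RPC for creating such bdevs, but the 100 MiB size and 4 KiB block size below are illustrative placeholders, not values taken from this log:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for b in {0..7}; do
        "$rpc_py" bdev_null_create "null$b" 100 4096   # name, size_mb, block_size (sizes assumed)
    done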
00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.623 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:36.887 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.145 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:37.405 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.405 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.405 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:37.405 17:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.405 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:37.405 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:37.405 17:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:37.405 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:37.405 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.662 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.921 17:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:37.921 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:33:38.180 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.439 17:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.439 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.697 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
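From here the trace switches to teardown: the trap is cleared and nvmftestfini calls nvmfcleanup, which syncs, drops set -e, and retries module unload inside a for i in {1..20} loop, since the initiator modules can still be busy immediately after a test; the rmmod lines that follow are modprobe's verbose output. A condensed sketch of that sequence per the traced common.sh lines, with the back-off between attempts assumed:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                          # assumed pause before the next attempt
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor (pid 119500 here)

After the modules are gone, killprocess verifies the pid still belongs to the target before killing it, and nvmf_veth_fini dismantles the bridge, veth pairs, and network namespace seen in the ip link / ip netns commands that follow.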
00:33:38.697 rmmod nvme_tcp 00:33:38.954 rmmod nvme_fabrics 00:33:38.954 rmmod nvme_keyring 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 119500 ']' 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 119500 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 119500 ']' 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 119500 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119500 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:38.954 killing process with pid 119500 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119500' 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 119500 00:33:38.954 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 119500 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:39.211 17:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:33:39.469 00:33:39.469 real 0m46.537s 00:33:39.469 user 3m26.164s 00:33:39.469 sys 0m19.942s 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:39.469 ************************************ 00:33:39.469 END TEST nvmf_ns_hotplug_stress 00:33:39.469 ************************************ 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:33:39.469 ************************************ 00:33:39.469 START TEST nvmf_delete_subsystem 00:33:39.469 ************************************ 00:33:39.469 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:39.469 * Looking for test storage... 00:33:39.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.728 --rc genhtml_branch_coverage=1 00:33:39.728 --rc genhtml_function_coverage=1 00:33:39.728 --rc genhtml_legend=1 00:33:39.728 --rc geninfo_all_blocks=1 00:33:39.728 --rc geninfo_unexecuted_blocks=1 00:33:39.728 00:33:39.728 ' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.728 --rc genhtml_branch_coverage=1 00:33:39.728 --rc genhtml_function_coverage=1 00:33:39.728 --rc genhtml_legend=1 00:33:39.728 --rc geninfo_all_blocks=1 00:33:39.728 --rc geninfo_unexecuted_blocks=1 00:33:39.728 00:33:39.728 ' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.728 --rc genhtml_branch_coverage=1 00:33:39.728 --rc genhtml_function_coverage=1 00:33:39.728 --rc genhtml_legend=1 00:33:39.728 --rc geninfo_all_blocks=1 00:33:39.728 --rc geninfo_unexecuted_blocks=1 00:33:39.728 00:33:39.728 ' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.728 --rc genhtml_branch_coverage=1 00:33:39.728 --rc genhtml_function_coverage=1 00:33:39.728 --rc 
genhtml_legend=1 00:33:39.728 --rc geninfo_all_blocks=1 00:33:39.728 --rc geninfo_unexecuted_blocks=1 00:33:39.728 00:33:39.728 ' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.728 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.729 17:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.729 17:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:39.729 Cannot find device "nvmf_init_br" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:39.729 Cannot find device "nvmf_init_br2" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:39.729 Cannot find device "nvmf_tgt_br" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:39.729 Cannot find device "nvmf_tgt_br2" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:39.729 Cannot find device "nvmf_init_br" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:39.729 Cannot find device "nvmf_init_br2" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:39.729 Cannot find device "nvmf_tgt_br" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:39.729 Cannot find device "nvmf_tgt_br2" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:39.729 Cannot find device "nvmf_br" 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:33:39.729 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:39.987 Cannot find device "nvmf_init_if" 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:39.987 Cannot find device "nvmf_init_if2" 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:39.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:39.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:39.987 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:39.988 17:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:39.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:39.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:33:39.988 00:33:39.988 --- 10.0.0.3 ping statistics --- 00:33:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.988 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:39.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:39.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:33:39.988 00:33:39.988 --- 10.0.0.4 ping statistics --- 00:33:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.988 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:39.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:33:39.988 00:33:39.988 --- 10.0.0.1 ping statistics --- 00:33:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.988 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:33:39.988 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:39.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:39.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:33:39.988 00:33:39.988 --- 10.0.0.2 ping statistics --- 00:33:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.988 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # return 0 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=121998 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 121998 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 121998 ']' 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
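The "Cannot find device" and "Cannot open network namespace" errors near the top of this block are the idempotent teardown of a previous run; each failing command is immediately followed by a "true" in the trace, so cleanup never aborts the script. Condensed, the fixture that nvmf/common.sh then rebuilds is two veth pairs for the initiator side and two for the target side, with the target ends moved into a private namespace and all host-side peers stitched onto one bridge. A minimal sketch assembled from the traced commands (device names, addresses, and port 4420 are the harness's own fixtures; the remaining 'ip link set ... up' steps and the SPDK_NVMF comment tagging on the iptables rules are elided for brevity):

    ip netns add nvmf_tgt_ns_spdk
    # *_if ends carry traffic; *_br ends get enslaved to a common bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target ends live inside the namespace, initiator ends stay on the host
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # one bridge connects the four host-side peers
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # let NVMe/TCP (port 4420) in on the initiator-facing interfaces
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host to namespace, across the bridge

The four pings in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify both directions before the target is started.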
00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.247 17:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:40.247 [2024-11-20 17:54:21.872805] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:40.247 [2024-11-20 17:54:21.873915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:40.247 [2024-11-20 17:54:21.873997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.247 [2024-11-20 17:54:22.012534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:40.506 [2024-11-20 17:54:22.109257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.506 [2024-11-20 17:54:22.109323] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.506 [2024-11-20 17:54:22.109335] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.506 [2024-11-20 17:54:22.109343] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.506 [2024-11-20 17:54:22.109351] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.506 [2024-11-20 17:54:22.109540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.506 [2024-11-20 17:54:22.109547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.506 [2024-11-20 17:54:22.200325] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.506 [2024-11-20 17:54:22.201118] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:40.506 [2024-11-20 17:54:22.201227] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
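Two details in the startup above are worth noting: nvmf_tgt runs under 'ip netns exec', so it only sees the namespaced interfaces, and --interrupt-mode switches the reactors and every spdk_thread to interrupt-driven operation (the thread.c and reactor.c notices). waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-wait idea, using rpc_get_methods as a cheap probe (this is the gist of the helper, not its verbatim code):

    # start the target inside the namespace, exactly as traced above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the RPC socket until the target answers; bail out if it died
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.1
    done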
00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 [2024-11-20 17:54:22.978616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 [2024-11-20 17:54:23.006967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 NULL1 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 Delay0 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122049 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:41.440 17:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:41.440 [2024-11-20 17:54:23.217623] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
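rpc_cmd in the trace is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; spelled out, the provisioning sequence above is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delays in usec, so about 1 s
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the test: with roughly a second of artificial latency per I/O and perf driving queue depth 128, the nvmf_delete_subsystem issued after the two-second sleep is guaranteed to land while I/O is still in flight. The flood of "completed with error (sct=0, sc=8)" entries that follows is that in-flight I/O being failed back (generic status 0x08 reads as command aborted due to SQ deletion), and "starting I/O failed: -6" is the host side getting ENXIO for new submissions.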
00:33:43.340 17:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.340 17:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.340 17:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 [2024-11-20 17:54:25.260605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440390 is same with the state(6) to be set 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write 
completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, 
sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Write completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 Read completed with error (sct=0, sc=8) 00:33:43.599 starting I/O failed: -6 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Write 
completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Write completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 Read completed with error (sct=0, sc=8) 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:43.600 starting I/O failed: -6 00:33:44.538 [2024-11-20 17:54:26.234557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145db90 is same with the state(6) to be set 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 
[2024-11-20 17:54:26.260664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441530 is same with the state(6) to be set 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 [2024-11-20 17:54:26.260890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145df50 is same with the state(6) to be set 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 [2024-11-20 17:54:26.261567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f124000cfe0 is 
same with the state(6) to be set 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Write completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.538 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 Read completed with error (sct=0, sc=8) 00:33:44.539 Write completed with error (sct=0, sc=8) 00:33:44.539 [2024-11-20 17:54:26.262346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f124000d7c0 is same with the state(6) to be set 00:33:44.539 Initializing NVMe Controllers 00:33:44.539 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:33:44.539 Controller IO queue size 128, less than required. 00:33:44.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:44.539 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:44.539 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:44.539 Initialization complete. Launching workers. 
00:33:44.539 ========================================================
00:33:44.539 Latency(us)
00:33:44.539 Device Information : IOPS MiB/s Average min max
00:33:44.539 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.72 0.08 904015.13 487.27 1016082.04
00:33:44.539 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.13 0.09 929037.21 1095.79 1021199.63
00:33:44.539 ========================================================
00:33:44.539 Total : 341.85 0.17 916833.97 487.27 1021199.63
00:33:44.539
00:33:44.539 [2024-11-20 17:54:26.263636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145db90 (9): Bad file descriptor
00:33:44.539 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:44.539 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.539 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:44.539 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122049
00:33:44.539 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122049
00:33:45.105 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122049) - No such process
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122049
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 122049
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 122049
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:45.105 [2024-11-20 17:54:26.786853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=122090 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:45.105 17:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:45.363 [2024-11-20 17:54:26.960095] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
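The same pattern now repeats: re-create the subsystem, start a second, 3-second perf run (pid 122090), and poll it rather than wait on it directly. Reconstructed from the @56-@58 entries above and the @60 checks that follow, the polling loop is essentially (a sketch, not the verbatim delete_subsystem.sh):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1    # about 10 s of budget at 0.5 s per poll
        sleep 0.5
    done

Once kill -0 reports "No such process", perf has exited on its own, wait reaps its status, and nvmftestfini tears the fixture down in reverse: unload the nvme-tcp modules, kill the target, restore iptables minus the SPDK_NVMF-tagged rules, delete the veths and bridge, and remove the namespace.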
00:33:45.621 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:45.621 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:45.621 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:46.187 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:46.187 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:46.187 17:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:46.754 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:46.754 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:46.754 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:47.319 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:47.319 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:47.319 17:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:47.576 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:47.576 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:47.576 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:48.142 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:48.142 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090 00:33:48.142 17:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:48.449 Initializing NVMe Controllers 00:33:48.449 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.449 Controller IO queue size 128, less than required. 00:33:48.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:48.449 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:48.449 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:48.449 Initialization complete. Launching workers. 
00:33:48.449 ========================================================
00:33:48.449 Latency(us)
00:33:48.449 Device Information : IOPS MiB/s Average min max
00:33:48.449 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006060.81 1000198.19 1016971.28
00:33:48.449 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004188.49 1000152.99 1014357.75
00:33:48.449 ========================================================
00:33:48.449 Total : 256.00 0.12 1005124.65 1000152.99 1016971.28
00:33:48.449
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122090
00:33:48.731 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (122090) - No such process
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 122090
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:48.731 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 121998 ']'
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 121998
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 121998 ']'
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 121998
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 --
# ps --no-headers -o comm= 121998 00:33:48.731 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:48.732 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:48.732 killing process with pid 121998 00:33:48.732 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121998' 00:33:48.732 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 121998 00:33:48.732 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 121998 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:48.990 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:33:49.248 00:33:49.248 real 0m9.788s 00:33:49.248 user 0m24.767s 00:33:49.248 sys 0m2.133s 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.248 ************************************ 00:33:49.248 END TEST nvmf_delete_subsystem 00:33:49.248 ************************************ 00:33:49.248 17:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:49.248 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:49.248 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:49.248 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:49.248 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:49.248 ************************************ 00:33:49.248 START TEST nvmf_host_management 00:33:49.248 ************************************ 00:33:49.248 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:49.507 * Looking for test storage... 
00:33:49.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.507 --rc genhtml_branch_coverage=1 00:33:49.507 --rc genhtml_function_coverage=1 00:33:49.507 --rc genhtml_legend=1 00:33:49.507 --rc geninfo_all_blocks=1 00:33:49.507 --rc geninfo_unexecuted_blocks=1 00:33:49.507 00:33:49.507 ' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.507 --rc genhtml_branch_coverage=1 00:33:49.507 --rc genhtml_function_coverage=1 00:33:49.507 --rc genhtml_legend=1 00:33:49.507 --rc geninfo_all_blocks=1 00:33:49.507 --rc geninfo_unexecuted_blocks=1 00:33:49.507 00:33:49.507 ' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.507 --rc genhtml_branch_coverage=1 00:33:49.507 --rc genhtml_function_coverage=1 00:33:49.507 --rc genhtml_legend=1 00:33:49.507 --rc geninfo_all_blocks=1 00:33:49.507 --rc geninfo_unexecuted_blocks=1 00:33:49.507 00:33:49.507 ' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:49.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.507 --rc genhtml_branch_coverage=1 00:33:49.507 --rc genhtml_function_coverage=1 00:33:49.507 --rc genhtml_legend=1 
00:33:49.507 --rc geninfo_all_blocks=1 00:33:49.507 --rc geninfo_unexecuted_blocks=1 00:33:49.507 00:33:49.507 ' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.507 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.508 17:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:49.508 17:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:49.508 Cannot find device "nvmf_init_br" 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:49.508 Cannot find device "nvmf_init_br2" 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:49.508 Cannot find device "nvmf_tgt_br" 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:33:49.508 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:49.508 Cannot find device "nvmf_tgt_br2" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:49.767 Cannot find device "nvmf_init_br" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:33:49.767 Cannot find device "nvmf_init_br2" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:49.767 Cannot find device "nvmf_tgt_br" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:49.767 Cannot find device "nvmf_tgt_br2" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:49.767 Cannot find device "nvmf_br" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:49.767 Cannot find device "nvmf_init_if" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:49.767 Cannot find device "nvmf_init_if2" 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:49.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:49.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:49.767 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:49.768 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:50.027 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:50.028 17:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:50.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:50.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:33:50.028 00:33:50.028 --- 10.0.0.3 ping statistics --- 00:33:50.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.028 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:50.028 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:50.028 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:33:50.028 00:33:50.028 --- 10.0.0.4 ping statistics --- 00:33:50.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.028 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:50.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:33:50.028 00:33:50.028 --- 10.0.0.1 ping statistics --- 00:33:50.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.028 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:50.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:50.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:33:50.028 00:33:50.028 --- 10.0.0.2 ping statistics --- 00:33:50.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.028 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=122375 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 122375 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 122375 ']' 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
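The records above come down to starting nvmf_tgt inside the test namespace and then blocking until its RPC socket answers. A minimal sketch of that sequence, assuming rpc.py's rpc_get_methods call as the liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping, e.g. the max_retries=100 counter visible above):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the default UNIX-domain RPC socket until the target is ready.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; do
        sleep 0.1
    done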
00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.028 17:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.028 [2024-11-20 17:54:31.750620] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:50.028 [2024-11-20 17:54:31.751731] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:50.028 [2024-11-20 17:54:31.751804] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.287 [2024-11-20 17:54:31.888204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:50.287 [2024-11-20 17:54:31.989485] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.287 [2024-11-20 17:54:31.989547] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.287 [2024-11-20 17:54:31.989559] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.287 [2024-11-20 17:54:31.989574] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.287 [2024-11-20 17:54:31.989581] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.287 [2024-11-20 17:54:31.989779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.287 [2024-11-20 17:54:31.989941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.287 [2024-11-20 17:54:31.990118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:33:50.287 [2024-11-20 17:54:31.990125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.545 [2024-11-20 17:54:32.090587] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:50.545 [2024-11-20 17:54:32.091093] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:50.545 [2024-11-20 17:54:32.091193] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:50.545 [2024-11-20 17:54:32.091296] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:50.545 [2024-11-20 17:54:32.091421] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
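With the target up (reactors on cores 1-4, every spdk_thread in interrupt mode), the records that follow create the TCP transport over /var/tmp/spdk.sock and then replay the rpcs.txt batch that host_management.sh assembles. Given the values captured above (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, serial SPDKISFASTANDAWESOME, the 10.0.0.3:4420 listener, and the host0 NQN that the test removes later), the batch plausibly expands to something like the following; the exact flags are an inference from those variables, not a capture:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with a 512-byte block size, named Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0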
00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:50.545 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 [2024-11-20 17:54:32.183629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 Malloc0 00:33:50.546 [2024-11-20 17:54:32.259843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=122428 00:33:50.546 17:54:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 122428 /var/tmp/bdevperf.sock 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 122428 ']' 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:50.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:50.546 { 00:33:50.546 "params": { 00:33:50.546 "name": "Nvme$subsystem", 00:33:50.546 "trtype": "$TEST_TRANSPORT", 00:33:50.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.546 "adrfam": "ipv4", 00:33:50.546 "trsvcid": "$NVMF_PORT", 00:33:50.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.546 "hdgst": ${hdgst:-false}, 00:33:50.546 "ddgst": ${ddgst:-false} 00:33:50.546 }, 00:33:50.546 "method": "bdev_nvme_attach_controller" 00:33:50.546 } 00:33:50.546 EOF 00:33:50.546 )") 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
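The heredoc template above is rendered once per subsystem and handed to bdevperf through bash process substitution, which is why the captured command line reads --json /dev/fd/63. Written out, the invocation is equivalent to the following (the <(...) spelling is the assumption here; every flag is taken from the command line above):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10

The jq . call just above normalizes the assembled document; the rendered result, attaching Nvme0 to 10.0.0.3:4420 as cnode0/host0, appears in the next record.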
00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:33:50.546 17:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:50.546 "params": { 00:33:50.546 "name": "Nvme0", 00:33:50.546 "trtype": "tcp", 00:33:50.546 "traddr": "10.0.0.3", 00:33:50.546 "adrfam": "ipv4", 00:33:50.546 "trsvcid": "4420", 00:33:50.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:50.546 "hdgst": false, 00:33:50.546 "ddgst": false 00:33:50.546 }, 00:33:50.546 "method": "bdev_nvme_attach_controller" 00:33:50.546 }' 00:33:50.804 [2024-11-20 17:54:32.396921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:50.804 [2024-11-20 17:54:32.397077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122428 ] 00:33:50.804 [2024-11-20 17:54:32.540803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.062 [2024-11-20 17:54:32.648265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.062 Running I/O for 10 seconds... 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.000 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.000 [2024-11-20 17:54:33.532204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 
17:54:33.532384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 
17:54:33.532635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.000 [2024-11-20 17:54:33.532761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.000 [2024-11-20 17:54:33.532771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 
17:54:33.532843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.532988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.532998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 
17:54:33.533052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.001 [2024-11-20 17:54:33.533609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.001 [2024-11-20 17:54:33.533619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.002 [2024-11-20 17:54:33.533642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.002 [2024-11-20 17:54:33.533657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.002 [2024-11-20 17:54:33.533673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.002 [2024-11-20 17:54:33.533683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.002 [2024-11-20 17:54:33.533695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.002 [2024-11-20 17:54:33.533704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.002 [2024-11-20 17:54:33.533793] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x150e060 was disconnected and freed. reset controller. 
00:33:52.002 [2024-11-20 17:54:33.534932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.002 task offset: 126848 on job bdev=Nvme0n1 fails 00:33:52.002 00:33:52.002 Latency(us) 00:33:52.002 [2024-11-20T17:54:33.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:52.002 Job: Nvme0n1 ended in about 0.71 seconds with error 00:33:52.002 Verification LBA range: start 0x0 length 0x400 00:33:52.002 Nvme0n1 : 0.71 1360.11 85.01 90.67 0.00 43053.40 2606.55 40513.16 00:33:52.002 [2024-11-20T17:54:33.790Z] =================================================================================================================== 00:33:52.002 [2024-11-20T17:54:33.790Z] Total : 1360.11 85.01 90.67 0.00 43053.40 2606.55 40513.16 00:33:52.002 [2024-11-20 17:54:33.536950] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:52.002 [2024-11-20 17:54:33.536980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129c530 (9): Bad file descriptor 00:33:52.002 [2024-11-20 17:54:33.538048] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:33:52.002 [2024-11-20 17:54:33.538218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:52.002 [2024-11-20 17:54:33.538242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.002 [2024-11-20 17:54:33.538261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:33:52.002 [2024-11-20 17:54:33.538273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:33:52.002 [2024-11-20 17:54:33.538282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.002 [2024-11-20 17:54:33.538291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x129c530 00:33:52.002 [2024-11-20 17:54:33.538329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129c530 (9): Bad file descriptor 00:33:52.002 [2024-11-20 17:54:33.538347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.002 [2024-11-20 17:54:33.538357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.002 [2024-11-20 17:54:33.538368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:52.002 [2024-11-20 17:54:33.538385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.002 17:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:52.950 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 122428 00:33:52.951 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (122428) - No such process 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:52.951 { 00:33:52.951 "params": { 00:33:52.951 "name": "Nvme$subsystem", 00:33:52.951 "trtype": "$TEST_TRANSPORT", 00:33:52.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.951 "adrfam": "ipv4", 00:33:52.951 "trsvcid": "$NVMF_PORT", 00:33:52.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.951 "hdgst": ${hdgst:-false}, 00:33:52.951 "ddgst": ${ddgst:-false} 00:33:52.951 }, 00:33:52.951 "method": "bdev_nvme_attach_controller" 00:33:52.951 } 00:33:52.951 EOF 00:33:52.951 )") 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:33:52.951 17:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:52.951 "params": { 00:33:52.951 "name": "Nvme0", 00:33:52.951 "trtype": "tcp", 00:33:52.951 "traddr": "10.0.0.3", 00:33:52.951 "adrfam": "ipv4", 00:33:52.951 "trsvcid": "4420", 00:33:52.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.951 "hdgst": false, 00:33:52.951 "ddgst": false 00:33:52.951 }, 00:33:52.951 "method": "bdev_nvme_attach_controller" 00:33:52.951 }' 00:33:52.951 [2024-11-20 17:54:34.616094] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:52.951 [2024-11-20 17:54:34.616202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122478 ] 00:33:53.210 [2024-11-20 17:54:34.756882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.210 [2024-11-20 17:54:34.861516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.469 Running I/O for 1 seconds... 00:33:54.405 1373.00 IOPS, 85.81 MiB/s 00:33:54.405 Latency(us) 00:33:54.405 [2024-11-20T17:54:36.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.405 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:54.405 Verification LBA range: start 0x0 length 0x400 00:33:54.405 Nvme0n1 : 1.04 1416.47 88.53 0.00 0.00 44299.89 7298.33 41228.10 00:33:54.405 [2024-11-20T17:54:36.193Z] =================================================================================================================== 00:33:54.405 [2024-11-20T17:54:36.193Z] Total : 1416.47 88.53 0.00 0.00 44299.89 7298.33 41228.10 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.664 rmmod nvme_tcp 00:33:54.664 rmmod nvme_fabrics 00:33:54.664 rmmod nvme_keyring 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 122375 ']' 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 122375 00:33:54.664 17:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 122375 ']' 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 122375 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:54.664 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122375 00:33:54.923 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:54.923 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:54.923 killing process with pid 122375 00:33:54.923 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122375' 00:33:54.923 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 122375 00:33:54.923 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 122375 00:33:55.182 [2024-11-20 17:54:36.788877] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:55.182 17:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:55.182 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:55.440 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:55.440 17:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:55.440 00:33:55.440 real 0m6.061s 00:33:55.440 user 0m19.741s 00:33:55.440 sys 0m2.377s 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:55.440 ************************************ 00:33:55.440 END TEST nvmf_host_management 00:33:55.440 ************************************ 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:55.440 ************************************ 00:33:55.440 START TEST nvmf_lvol 00:33:55.440 ************************************ 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:55.440 * Looking for test storage... 
00:33:55.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:33:55.440 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.699 --rc genhtml_branch_coverage=1 00:33:55.699 --rc genhtml_function_coverage=1 00:33:55.699 --rc genhtml_legend=1 00:33:55.699 --rc geninfo_all_blocks=1 00:33:55.699 --rc geninfo_unexecuted_blocks=1 00:33:55.699 00:33:55.699 ' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.699 --rc genhtml_branch_coverage=1 00:33:55.699 --rc genhtml_function_coverage=1 00:33:55.699 --rc genhtml_legend=1 00:33:55.699 --rc geninfo_all_blocks=1 00:33:55.699 --rc geninfo_unexecuted_blocks=1 00:33:55.699 00:33:55.699 ' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.699 --rc genhtml_branch_coverage=1 00:33:55.699 --rc genhtml_function_coverage=1 00:33:55.699 --rc genhtml_legend=1 00:33:55.699 --rc geninfo_all_blocks=1 00:33:55.699 --rc geninfo_unexecuted_blocks=1 00:33:55.699 00:33:55.699 ' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:55.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.699 --rc genhtml_branch_coverage=1 00:33:55.699 --rc genhtml_function_coverage=1 00:33:55.699 --rc genhtml_legend=1 00:33:55.699 --rc geninfo_all_blocks=1 00:33:55.699 --rc geninfo_unexecuted_blocks=1 00:33:55.699 00:33:55.699 ' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.699 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.700 17:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:55.700 Cannot find device "nvmf_init_br" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:55.700 Cannot find device "nvmf_init_br2" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:55.700 Cannot find device "nvmf_tgt_br" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:55.700 Cannot find device "nvmf_tgt_br2" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:55.700 Cannot find device "nvmf_init_br" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:55.700 Cannot find device "nvmf_init_br2" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:55.700 Cannot find 
device "nvmf_tgt_br" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:55.700 Cannot find device "nvmf_tgt_br2" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:55.700 Cannot find device "nvmf_br" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:55.700 Cannot find device "nvmf_init_if" 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:33:55.700 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:55.958 Cannot find device "nvmf_init_if2" 00:33:55.958 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:33:55.958 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:55.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:55.958 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:33:55.958 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:55.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:55.958 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:55.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:55.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:33:55.959 00:33:55.959 --- 10.0.0.3 ping statistics --- 00:33:55.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.959 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:55.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:55.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:33:55.959 00:33:55.959 --- 10.0.0.4 ping statistics --- 00:33:55.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.959 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:55.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:55.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:33:55.959 00:33:55.959 --- 10.0.0.1 ping statistics --- 00:33:55.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.959 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:55.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:33:55.959 00:33:55.959 --- 10.0.0.2 ping statistics --- 00:33:55.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.959 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:55.959 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=122744 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 122744 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 122744 ']' 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:56.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:56.218 17:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:56.218 [2024-11-20 17:54:37.847374] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:56.218 [2024-11-20 17:54:37.849414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:56.218 [2024-11-20 17:54:37.849547] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.476 [2024-11-20 17:54:38.006972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:56.476 [2024-11-20 17:54:38.140580] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.476 [2024-11-20 17:54:38.140697] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.476 [2024-11-20 17:54:38.140724] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.476 [2024-11-20 17:54:38.140734] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.476 [2024-11-20 17:54:38.140744] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.476 [2024-11-20 17:54:38.140877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.476 [2024-11-20 17:54:38.141653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.476 [2024-11-20 17:54:38.141706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.734 [2024-11-20 17:54:38.269532] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:56.734 [2024-11-20 17:54:38.269665] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:56.734 [2024-11-20 17:54:38.270002] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:56.734 [2024-11-20 17:54:38.280431] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.301 17:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:57.559 [2024-11-20 17:54:39.294752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.559 17:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:58.128 17:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:58.128 17:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:58.388 17:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:58.388 17:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:58.956 17:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:59.214 17:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bafba98c-eec6-491f-913d-b2c16f8c3f6f 00:33:59.214 17:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bafba98c-eec6-491f-913d-b2c16f8c3f6f lvol 20 00:33:59.473 17:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1dee1f0f-d72b-4e8d-a40b-bae870115f38 00:33:59.473 17:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:59.731 17:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1dee1f0f-d72b-4e8d-a40b-bae870115f38 00:33:59.990 17:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:00.248 [2024-11-20 17:54:41.994868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:00.248 17:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:00.814 17:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=122897 00:34:00.814 17:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:00.814 17:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:01.748 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1dee1f0f-d72b-4e8d-a40b-bae870115f38 MY_SNAPSHOT 00:34:02.007 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2a992a1f-8dfd-4844-9cb4-081973a5ed68 00:34:02.007 17:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1dee1f0f-d72b-4e8d-a40b-bae870115f38 30 00:34:02.574 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2a992a1f-8dfd-4844-9cb4-081973a5ed68 MY_CLONE 00:34:02.833 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=50255afa-cd0d-4bf3-95d5-03b80d691c0c 00:34:02.833 17:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 50255afa-cd0d-4bf3-95d5-03b80d691c0c 00:34:04.207 17:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 122897 00:34:12.319 Initializing NVMe Controllers 00:34:12.319 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:34:12.319 Controller IO queue size 128, less than required. 00:34:12.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:12.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:12.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:12.319 Initialization complete. Launching workers. 
00:34:12.319 ======================================================== 00:34:12.319 Latency(us) 00:34:12.319 Device Information : IOPS MiB/s Average min max 00:34:12.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6803.30 26.58 18824.94 3073.43 105936.74 00:34:12.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7024.00 27.44 18228.60 3181.59 105962.01 00:34:12.319 ======================================================== 00:34:12.319 Total : 13827.30 54.01 18522.01 3073.43 105962.01 00:34:12.319 00:34:12.319 17:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1dee1f0f-d72b-4e8d-a40b-bae870115f38 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bafba98c-eec6-491f-913d-b2c16f8c3f6f 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:12.319 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.320 rmmod nvme_tcp 00:34:12.320 rmmod nvme_fabrics 00:34:12.320 rmmod nvme_keyring 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 122744 ']' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 122744 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 122744 ']' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 122744 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122744 00:34:12.320 killing 
process with pid 122744 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122744' 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 122744 00:34:12.320 17:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 122744 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:12.320 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:12.579 
17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:34:12.579 00:34:12.579 real 0m17.187s 00:34:12.579 user 0m58.133s 00:34:12.579 sys 0m5.969s 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:12.579 ************************************ 00:34:12.579 END TEST nvmf_lvol 00:34:12.579 ************************************ 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:12.579 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.838 ************************************ 00:34:12.838 START TEST nvmf_lvs_grow 00:34:12.838 ************************************ 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:12.838 * Looking for test storage... 
00:34:12.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:12.838 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.839 --rc genhtml_branch_coverage=1 00:34:12.839 --rc genhtml_function_coverage=1 00:34:12.839 --rc genhtml_legend=1 00:34:12.839 --rc geninfo_all_blocks=1 00:34:12.839 --rc geninfo_unexecuted_blocks=1 00:34:12.839 00:34:12.839 ' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.839 --rc genhtml_branch_coverage=1 00:34:12.839 --rc genhtml_function_coverage=1 00:34:12.839 --rc genhtml_legend=1 00:34:12.839 --rc geninfo_all_blocks=1 00:34:12.839 --rc geninfo_unexecuted_blocks=1 00:34:12.839 00:34:12.839 ' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.839 --rc genhtml_branch_coverage=1 00:34:12.839 --rc genhtml_function_coverage=1 00:34:12.839 --rc genhtml_legend=1 00:34:12.839 --rc geninfo_all_blocks=1 00:34:12.839 --rc geninfo_unexecuted_blocks=1 00:34:12.839 00:34:12.839 ' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.839 --rc genhtml_branch_coverage=1 00:34:12.839 --rc genhtml_function_coverage=1 00:34:12.839 --rc genhtml_legend=1 00:34:12.839 --rc geninfo_all_blocks=1 00:34:12.839 --rc geninfo_unexecuted_blocks=1 00:34:12.839 00:34:12.839 ' 00:34:12.839 17:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:12.839 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.840 17:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:12.840 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:13.098 Cannot find device "nvmf_init_br" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:13.098 Cannot find device "nvmf_init_br2" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:13.098 Cannot find device "nvmf_tgt_br" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:13.098 Cannot find device "nvmf_tgt_br2" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:13.098 Cannot find device "nvmf_init_br" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:13.098 Cannot find device "nvmf_init_br2" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:13.098 Cannot find device "nvmf_tgt_br" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:13.098 Cannot find device "nvmf_tgt_br2" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:13.098 Cannot find device "nvmf_br" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:13.098 Cannot find device "nvmf_init_if" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:13.098 Cannot find device "nvmf_init_if2" 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:13.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:13.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:13.098 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:13.358 17:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:34:13.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:13.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:34:13.358 00:34:13.358 --- 10.0.0.3 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:13.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:13.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:34:13.358 00:34:13.358 --- 10.0.0.4 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:13.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:13.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:34:13.358 00:34:13.358 --- 10.0.0.1 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:13.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:34:13.358 00:34:13.358 --- 10.0.0.2 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=123308 00:34:13.358 17:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 123308 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 123308 ']' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:13.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:13.358 17:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.358 [2024-11-20 17:54:55.123685] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:13.358 [2024-11-20 17:54:55.125011] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:13.358 [2024-11-20 17:54:55.125732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.616 [2024-11-20 17:54:55.271342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.616 [2024-11-20 17:54:55.394879] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.617 [2024-11-20 17:54:55.394939] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.617 [2024-11-20 17:54:55.394950] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.617 [2024-11-20 17:54:55.394959] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.617 [2024-11-20 17:54:55.394968] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:13.617 [2024-11-20 17:54:55.395001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.875 [2024-11-20 17:54:55.513484] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:13.875 [2024-11-20 17:54:55.513862] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
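The lvs_grow_clean steps traced below condense to a short RPC sequence: create a file-backed AIO bdev, build an lvstore with 4 MiB clusters on it, carve out a 150 MiB lvol and export it over NVMe/TCP, then double the backing file, rescan, and grow the lvstore. A condensed sketch under those assumptions, with the backing-file path shortened (the run uses test/nvmf/target/aio_bdev under the repo) and the per-run UUIDs captured into variables:

    truncate -s 200M /tmp/aio_bdev                        # backing file for the AIO bdev
    rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096    # 4 KiB logical blocks
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 4 MiB clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    truncate -s 400M /tmp/aio_bdev                        # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev                       # ...and let the AIO bdev notice
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"               # data clusters: 49 -> 99

Every RPC here appears verbatim in the trace that follows; only the paths and the captured shell variables are abbreviations.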
00:34:14.443 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:14.443 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:34:14.443 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:14.443 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:14.443 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:14.701 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.701 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:14.960 [2024-11-20 17:54:56.508008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:14.960 ************************************ 00:34:14.960 START TEST lvs_grow_clean 00:34:14.960 ************************************ 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:14.960 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:15.219 17:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:15.219 17:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:15.478 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=309025cf-0c26-43e1-ab99-120ca14ef938 00:34:15.478 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:15.478 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:15.737 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:15.737 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:15.737 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 309025cf-0c26-43e1-ab99-120ca14ef938 lvol 150 00:34:16.305 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c73f150-0bf0-409a-a58c-6333b2fcc9b6 00:34:16.305 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:16.305 17:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:16.562 [2024-11-20 17:54:58.115696] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:16.562 [2024-11-20 17:54:58.115850] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:16.562 true 00:34:16.562 17:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:16.562 17:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:16.820 17:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:16.820 17:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:17.385 17:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c73f150-0bf0-409a-a58c-6333b2fcc9b6 00:34:17.642 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:17.901 [2024-11-20 17:54:59.584318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:17.901 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=123479 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 123479 /var/tmp/bdevperf.sock 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 123479 ']' 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:18.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:18.466 17:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:18.466 [2024-11-20 17:55:00.041429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:18.467 [2024-11-20 17:55:00.041536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123479 ] 00:34:18.467 [2024-11-20 17:55:00.180811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.727 [2024-11-20 17:55:00.315386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.727 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.727 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:34:18.727 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:19.302 Nvme0n1 00:34:19.302 17:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:19.560 [ 00:34:19.560 { 00:34:19.560 "aliases": [ 00:34:19.560 "9c73f150-0bf0-409a-a58c-6333b2fcc9b6" 00:34:19.560 ], 00:34:19.560 "assigned_rate_limits": { 00:34:19.561 "r_mbytes_per_sec": 0, 00:34:19.561 "rw_ios_per_sec": 0, 00:34:19.561 "rw_mbytes_per_sec": 0, 00:34:19.561 "w_mbytes_per_sec": 0 00:34:19.561 }, 00:34:19.561 "block_size": 4096, 00:34:19.561 "claimed": false, 00:34:19.561 "driver_specific": { 00:34:19.561 "mp_policy": "active_passive", 00:34:19.561 "nvme": [ 00:34:19.561 { 00:34:19.561 "ctrlr_data": { 00:34:19.561 "ana_reporting": false, 00:34:19.561 "cntlid": 1, 00:34:19.561 "firmware_revision": "24.09.1", 00:34:19.561 "model_number": "SPDK bdev Controller", 00:34:19.561 "multi_ctrlr": true, 00:34:19.561 "oacs": { 00:34:19.561 "firmware": 0, 00:34:19.561 "format": 0, 00:34:19.561 "ns_manage": 0, 00:34:19.561 "security": 0 00:34:19.561 }, 00:34:19.561 "serial_number": "SPDK0", 00:34:19.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.561 "vendor_id": "0x8086" 00:34:19.561 }, 00:34:19.561 "ns_data": { 00:34:19.561 "can_share": true, 00:34:19.561 "id": 1 00:34:19.561 }, 00:34:19.561 "trid": { 00:34:19.561 "adrfam": "IPv4", 00:34:19.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.561 "traddr": "10.0.0.3", 00:34:19.561 "trsvcid": "4420", 00:34:19.561 "trtype": "TCP" 00:34:19.561 }, 00:34:19.561 "vs": { 00:34:19.561 "nvme_version": "1.3" 00:34:19.561 } 00:34:19.561 } 00:34:19.561 ] 00:34:19.561 }, 00:34:19.561 "memory_domains": [ 00:34:19.561 { 00:34:19.561 "dma_device_id": "system", 00:34:19.561 "dma_device_type": 1 00:34:19.561 } 00:34:19.561 ], 00:34:19.561 "name": "Nvme0n1", 00:34:19.561 "num_blocks": 38912, 00:34:19.561 "numa_id": -1, 00:34:19.561 "product_name": "NVMe disk", 00:34:19.561 "supported_io_types": { 00:34:19.561 "abort": true, 00:34:19.561 "compare": true, 00:34:19.561 "compare_and_write": true, 00:34:19.561 "copy": true, 00:34:19.561 "flush": true, 00:34:19.561 "get_zone_info": false, 00:34:19.561 "nvme_admin": true, 00:34:19.561 "nvme_io": true, 00:34:19.561 "nvme_io_md": false, 00:34:19.561 "nvme_iov_md": false, 00:34:19.561 "read": true, 00:34:19.561 "reset": true, 00:34:19.561 "seek_data": false, 00:34:19.561 
"seek_hole": false, 00:34:19.561 "unmap": true, 00:34:19.561 "write": true, 00:34:19.561 "write_zeroes": true, 00:34:19.561 "zcopy": false, 00:34:19.561 "zone_append": false, 00:34:19.561 "zone_management": false 00:34:19.561 }, 00:34:19.561 "uuid": "9c73f150-0bf0-409a-a58c-6333b2fcc9b6", 00:34:19.561 "zoned": false 00:34:19.561 } 00:34:19.561 ] 00:34:19.561 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=123510 00:34:19.561 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:19.561 17:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:19.818 Running I/O for 10 seconds... 00:34:20.750 Latency(us) 00:34:20.750 [2024-11-20T17:55:02.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.750 Nvme0n1 : 1.00 6126.00 23.93 0.00 0.00 0.00 0.00 0.00 00:34:20.750 [2024-11-20T17:55:02.538Z] =================================================================================================================== 00:34:20.750 [2024-11-20T17:55:02.538Z] Total : 6126.00 23.93 0.00 0.00 0.00 0.00 0.00 00:34:20.750 00:34:21.686 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:21.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.686 Nvme0n1 : 2.00 6554.50 25.60 0.00 0.00 0.00 0.00 0.00 00:34:21.686 [2024-11-20T17:55:03.474Z] =================================================================================================================== 00:34:21.686 [2024-11-20T17:55:03.474Z] Total : 6554.50 25.60 0.00 0.00 0.00 0.00 0.00 00:34:21.686 00:34:21.945 true 00:34:21.945 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:21.945 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:22.203 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:22.203 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:22.203 17:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 123510 00:34:22.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.770 Nvme0n1 : 3.00 6543.33 25.56 0.00 0.00 0.00 0.00 0.00 00:34:22.770 [2024-11-20T17:55:04.558Z] =================================================================================================================== 00:34:22.770 [2024-11-20T17:55:04.558Z] Total : 6543.33 25.56 0.00 0.00 0.00 0.00 0.00 00:34:22.770 00:34:23.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.792 Nvme0n1 : 4.00 6633.50 25.91 0.00 0.00 0.00 0.00 0.00 00:34:23.792 
[2024-11-20T17:55:05.580Z] =================================================================================================================== 00:34:23.792 [2024-11-20T17:55:05.580Z] Total : 6633.50 25.91 0.00 0.00 0.00 0.00 0.00 00:34:23.792 00:34:24.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.739 Nvme0n1 : 5.00 6682.60 26.10 0.00 0.00 0.00 0.00 0.00 00:34:24.739 [2024-11-20T17:55:06.527Z] =================================================================================================================== 00:34:24.739 [2024-11-20T17:55:06.527Z] Total : 6682.60 26.10 0.00 0.00 0.00 0.00 0.00 00:34:24.739 00:34:25.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.685 Nvme0n1 : 6.00 6698.33 26.17 0.00 0.00 0.00 0.00 0.00 00:34:25.685 [2024-11-20T17:55:07.473Z] =================================================================================================================== 00:34:25.685 [2024-11-20T17:55:07.473Z] Total : 6698.33 26.17 0.00 0.00 0.00 0.00 0.00 00:34:25.685 00:34:26.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:26.619 Nvme0n1 : 7.00 6742.57 26.34 0.00 0.00 0.00 0.00 0.00 00:34:26.619 [2024-11-20T17:55:08.407Z] =================================================================================================================== 00:34:26.619 [2024-11-20T17:55:08.407Z] Total : 6742.57 26.34 0.00 0.00 0.00 0.00 0.00 00:34:26.619 00:34:27.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.992 Nvme0n1 : 8.00 6777.62 26.48 0.00 0.00 0.00 0.00 0.00 00:34:27.992 [2024-11-20T17:55:09.780Z] =================================================================================================================== 00:34:27.992 [2024-11-20T17:55:09.780Z] Total : 6777.62 26.48 0.00 0.00 0.00 0.00 0.00 00:34:27.992 00:34:28.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:28.639 Nvme0n1 : 9.00 6792.56 26.53 0.00 0.00 0.00 0.00 0.00 00:34:28.639 [2024-11-20T17:55:10.427Z] =================================================================================================================== 00:34:28.639 [2024-11-20T17:55:10.427Z] Total : 6792.56 26.53 0.00 0.00 0.00 0.00 0.00 00:34:28.639 00:34:30.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:30.028 Nvme0n1 : 10.00 6770.90 26.45 0.00 0.00 0.00 0.00 0.00 00:34:30.028 [2024-11-20T17:55:11.816Z] =================================================================================================================== 00:34:30.028 [2024-11-20T17:55:11.816Z] Total : 6770.90 26.45 0.00 0.00 0.00 0.00 0.00 00:34:30.028 00:34:30.028 00:34:30.028 Latency(us) 00:34:30.028 [2024-11-20T17:55:11.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:30.028 Nvme0n1 : 10.02 6771.20 26.45 0.00 0.00 18885.01 4468.36 29312.47 00:34:30.028 [2024-11-20T17:55:11.816Z] =================================================================================================================== 00:34:30.028 [2024-11-20T17:55:11.816Z] Total : 6771.20 26.45 0.00 0.00 18885.01 4468.36 29312.47 00:34:30.028 { 00:34:30.028 "results": [ 00:34:30.028 { 00:34:30.028 "job": "Nvme0n1", 00:34:30.028 "core_mask": "0x2", 00:34:30.028 "workload": "randwrite", 00:34:30.028 "status": "finished", 00:34:30.028 "queue_depth": 128, 00:34:30.028 "io_size": 4096, 
00:34:30.028 "runtime": 10.01846, 00:34:30.028 "iops": 6771.20036412782, 00:34:30.028 "mibps": 26.450001422374296, 00:34:30.028 "io_failed": 0, 00:34:30.028 "io_timeout": 0, 00:34:30.028 "avg_latency_us": 18885.014777508117, 00:34:30.028 "min_latency_us": 4468.363636363636, 00:34:30.028 "max_latency_us": 29312.465454545454 00:34:30.028 } 00:34:30.028 ], 00:34:30.028 "core_count": 1 00:34:30.028 } 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 123479 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 123479 ']' 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 123479 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123479 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123479' 00:34:30.028 killing process with pid 123479 00:34:30.028 Received shutdown signal, test time was about 10.000000 seconds 00:34:30.028 00:34:30.028 Latency(us) 00:34:30.028 [2024-11-20T17:55:11.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.028 [2024-11-20T17:55:11.816Z] =================================================================================================================== 00:34:30.028 [2024-11-20T17:55:11.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 123479 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 123479 00:34:30.028 17:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:30.286 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:30.852 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:30.852 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:31.110 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:34:31.110 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:31.111 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:31.369 [2024-11-20 17:55:12.951804] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:31.369 17:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:31.627 2024/11/20 17:55:13 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:309025cf-0c26-43e1-ab99-120ca14ef938], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:34:31.627 request: 00:34:31.627 { 00:34:31.627 "method": "bdev_lvol_get_lvstores", 00:34:31.627 "params": { 00:34:31.627 "uuid": "309025cf-0c26-43e1-ab99-120ca14ef938" 00:34:31.627 } 00:34:31.627 } 00:34:31.627 Got JSON-RPC error response 00:34:31.627 GoRPCClient: error on JSON-RPC call 00:34:31.627 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:34:31.627 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 
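[The expected-failure check above in one line: after bdev_aio_delete tears the base bdev out from under the lvstore, bdev_lvol_get_lvstores must fail with Code=-19, and the NOT wrapper from common/autotest_common.sh inverts the exit status. Below is a minimal stand-in for that helper, not the real implementation — as the (( es > 128 )) step shows, the real one also treats crash-range exit codes specially.]

    NOT() {  # succeed only if the wrapped command fails normally
        if "$@"; then return 1; fi
        return 0
    }
    NOT scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938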
00:34:31.627 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:31.627 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:31.627 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:31.885 aio_bdev 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c73f150-0bf0-409a-a58c-6333b2fcc9b6 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9c73f150-0bf0-409a-a58c-6333b2fcc9b6 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:31.885 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:32.451 17:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c73f150-0bf0-409a-a58c-6333b2fcc9b6 -t 2000 00:34:32.709 [ 00:34:32.709 { 00:34:32.709 "aliases": [ 00:34:32.709 "lvs/lvol" 00:34:32.709 ], 00:34:32.709 "assigned_rate_limits": { 00:34:32.709 "r_mbytes_per_sec": 0, 00:34:32.709 "rw_ios_per_sec": 0, 00:34:32.709 "rw_mbytes_per_sec": 0, 00:34:32.709 "w_mbytes_per_sec": 0 00:34:32.709 }, 00:34:32.709 "block_size": 4096, 00:34:32.709 "claimed": false, 00:34:32.709 "driver_specific": { 00:34:32.709 "lvol": { 00:34:32.709 "base_bdev": "aio_bdev", 00:34:32.709 "clone": false, 00:34:32.709 "esnap_clone": false, 00:34:32.709 "lvol_store_uuid": "309025cf-0c26-43e1-ab99-120ca14ef938", 00:34:32.709 "num_allocated_clusters": 38, 00:34:32.709 "snapshot": false, 00:34:32.709 "thin_provision": false 00:34:32.709 } 00:34:32.709 }, 00:34:32.709 "name": "9c73f150-0bf0-409a-a58c-6333b2fcc9b6", 00:34:32.709 "num_blocks": 38912, 00:34:32.709 "product_name": "Logical Volume", 00:34:32.709 "supported_io_types": { 00:34:32.709 "abort": false, 00:34:32.709 "compare": false, 00:34:32.709 "compare_and_write": false, 00:34:32.709 "copy": false, 00:34:32.709 "flush": false, 00:34:32.709 "get_zone_info": false, 00:34:32.709 "nvme_admin": false, 00:34:32.709 "nvme_io": false, 00:34:32.709 "nvme_io_md": false, 00:34:32.709 "nvme_iov_md": false, 00:34:32.709 "read": true, 00:34:32.709 "reset": true, 00:34:32.709 "seek_data": true, 00:34:32.709 "seek_hole": true, 00:34:32.709 "unmap": true, 00:34:32.709 "write": true, 00:34:32.709 "write_zeroes": true, 00:34:32.709 "zcopy": false, 00:34:32.709 "zone_append": false, 00:34:32.709 "zone_management": false 00:34:32.709 }, 00:34:32.709 "uuid": 
"9c73f150-0bf0-409a-a58c-6333b2fcc9b6", 00:34:32.709 "zoned": false 00:34:32.709 } 00:34:32.709 ] 00:34:32.709 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:34:32.709 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:32.709 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:32.967 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:32.967 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:32.967 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:33.247 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:33.247 17:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9c73f150-0bf0-409a-a58c-6333b2fcc9b6 00:34:33.506 17:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 309025cf-0c26-43e1-ab99-120ca14ef938 00:34:34.073 17:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:34.332 17:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:34.590 ************************************ 00:34:34.590 END TEST lvs_grow_clean 00:34:34.590 ************************************ 00:34:34.590 00:34:34.590 real 0m19.789s 00:34:34.590 user 0m18.841s 00:34:34.590 sys 0m2.630s 00:34:34.590 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.590 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:34.849 ************************************ 00:34:34.849 START TEST lvs_grow_dirty 00:34:34.849 ************************************ 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:34:34.849 17:55:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:34.849 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:35.106 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:35.106 17:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:35.364 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:35.364 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:35.364 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:35.931 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:35.932 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:35.932 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fe0fba60-9d62-4ef4-ba17-44dd330024fe lvol 150 00:34:36.190 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:36.190 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:36.190 17:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:36.449 [2024-11-20 17:55:18.139753] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:36.449 [2024-11-20 17:55:18.139953] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:36.449 true 00:34:36.449 17:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:36.449 17:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:37.016 17:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:37.016 17:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:37.274 17:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:37.533 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:37.791 [2024-11-20 17:55:19.384253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:37.791 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=123904 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 123904 /var/tmp/bdevperf.sock 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 123904 ']' 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:34:38.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.049 17:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:38.049 [2024-11-20 17:55:19.757146] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:38.050 [2024-11-20 17:55:19.757262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123904 ] 00:34:38.308 [2024-11-20 17:55:19.902335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.308 [2024-11-20 17:55:20.015482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.243 17:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:39.243 17:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:34:39.243 17:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:39.502 Nvme0n1 00:34:39.502 17:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:39.760 [ 00:34:39.760 { 00:34:39.760 "aliases": [ 00:34:39.760 "49cd931b-e233-4a70-a61e-47baa73e7c75" 00:34:39.761 ], 00:34:39.761 "assigned_rate_limits": { 00:34:39.761 "r_mbytes_per_sec": 0, 00:34:39.761 "rw_ios_per_sec": 0, 00:34:39.761 "rw_mbytes_per_sec": 0, 00:34:39.761 "w_mbytes_per_sec": 0 00:34:39.761 }, 00:34:39.761 "block_size": 4096, 00:34:39.761 "claimed": false, 00:34:39.761 "driver_specific": { 00:34:39.761 "mp_policy": "active_passive", 00:34:39.761 "nvme": [ 00:34:39.761 { 00:34:39.761 "ctrlr_data": { 00:34:39.761 "ana_reporting": false, 00:34:39.761 "cntlid": 1, 00:34:39.761 "firmware_revision": "24.09.1", 00:34:39.761 "model_number": "SPDK bdev Controller", 00:34:39.761 "multi_ctrlr": true, 00:34:39.761 "oacs": { 00:34:39.761 "firmware": 0, 00:34:39.761 "format": 0, 00:34:39.761 "ns_manage": 0, 00:34:39.761 "security": 0 00:34:39.761 }, 00:34:39.761 "serial_number": "SPDK0", 00:34:39.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:39.761 "vendor_id": "0x8086" 00:34:39.761 }, 00:34:39.761 "ns_data": { 00:34:39.761 "can_share": true, 00:34:39.761 "id": 1 00:34:39.761 }, 00:34:39.761 "trid": { 00:34:39.761 "adrfam": "IPv4", 00:34:39.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:39.761 "traddr": "10.0.0.3", 00:34:39.761 "trsvcid": "4420", 00:34:39.761 "trtype": "TCP" 00:34:39.761 }, 00:34:39.761 "vs": { 00:34:39.761 "nvme_version": "1.3" 00:34:39.761 } 00:34:39.761 } 00:34:39.761 ] 00:34:39.761 }, 00:34:39.761 "memory_domains": [ 00:34:39.761 { 00:34:39.761 "dma_device_id": "system", 00:34:39.761 "dma_device_type": 1 00:34:39.761 } 00:34:39.761 ], 00:34:39.761 "name": "Nvme0n1", 00:34:39.761 "num_blocks": 
38912, 00:34:39.761 "numa_id": -1, 00:34:39.761 "product_name": "NVMe disk", 00:34:39.761 "supported_io_types": { 00:34:39.761 "abort": true, 00:34:39.761 "compare": true, 00:34:39.761 "compare_and_write": true, 00:34:39.761 "copy": true, 00:34:39.761 "flush": true, 00:34:39.761 "get_zone_info": false, 00:34:39.761 "nvme_admin": true, 00:34:39.761 "nvme_io": true, 00:34:39.761 "nvme_io_md": false, 00:34:39.761 "nvme_iov_md": false, 00:34:39.761 "read": true, 00:34:39.761 "reset": true, 00:34:39.761 "seek_data": false, 00:34:39.761 "seek_hole": false, 00:34:39.761 "unmap": true, 00:34:39.761 "write": true, 00:34:39.761 "write_zeroes": true, 00:34:39.761 "zcopy": false, 00:34:39.761 "zone_append": false, 00:34:39.761 "zone_management": false 00:34:39.761 }, 00:34:39.761 "uuid": "49cd931b-e233-4a70-a61e-47baa73e7c75", 00:34:39.761 "zoned": false 00:34:39.761 } 00:34:39.761 ] 00:34:39.761 17:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=123952 00:34:39.761 17:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:39.761 17:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:39.761 Running I/O for 10 seconds... 00:34:41.137 Latency(us) 00:34:41.137 [2024-11-20T17:55:22.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.137 Nvme0n1 : 1.00 6508.00 25.42 0.00 0.00 0.00 0.00 0.00 00:34:41.137 [2024-11-20T17:55:22.925Z] =================================================================================================================== 00:34:41.137 [2024-11-20T17:55:22.925Z] Total : 6508.00 25.42 0.00 0.00 0.00 0.00 0.00 00:34:41.137 00:34:41.705 17:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:41.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.964 Nvme0n1 : 2.00 6896.00 26.94 0.00 0.00 0.00 0.00 0.00 00:34:41.964 [2024-11-20T17:55:23.752Z] =================================================================================================================== 00:34:41.964 [2024-11-20T17:55:23.752Z] Total : 6896.00 26.94 0.00 0.00 0.00 0.00 0.00 00:34:41.964 00:34:41.964 true 00:34:42.223 17:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:42.223 17:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:42.482 17:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:42.482 17:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:42.482 17:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 123952 00:34:42.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 
128, IO size: 4096) 00:34:42.741 Nvme0n1 : 3.00 6687.00 26.12 0.00 0.00 0.00 0.00 0.00 00:34:42.741 [2024-11-20T17:55:24.529Z] =================================================================================================================== 00:34:42.741 [2024-11-20T17:55:24.529Z] Total : 6687.00 26.12 0.00 0.00 0.00 0.00 0.00 00:34:42.741 00:34:44.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:44.117 Nvme0n1 : 4.00 6848.00 26.75 0.00 0.00 0.00 0.00 0.00 00:34:44.117 [2024-11-20T17:55:25.905Z] =================================================================================================================== 00:34:44.117 [2024-11-20T17:55:25.905Z] Total : 6848.00 26.75 0.00 0.00 0.00 0.00 0.00 00:34:44.117 00:34:45.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:45.054 Nvme0n1 : 5.00 6954.80 27.17 0.00 0.00 0.00 0.00 0.00 00:34:45.054 [2024-11-20T17:55:26.842Z] =================================================================================================================== 00:34:45.054 [2024-11-20T17:55:26.842Z] Total : 6954.80 27.17 0.00 0.00 0.00 0.00 0.00 00:34:45.054 00:34:45.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:45.992 Nvme0n1 : 6.00 7032.00 27.47 0.00 0.00 0.00 0.00 0.00 00:34:45.992 [2024-11-20T17:55:27.780Z] =================================================================================================================== 00:34:45.992 [2024-11-20T17:55:27.780Z] Total : 7032.00 27.47 0.00 0.00 0.00 0.00 0.00 00:34:45.992 00:34:46.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:46.927 Nvme0n1 : 7.00 7087.86 27.69 0.00 0.00 0.00 0.00 0.00 00:34:46.927 [2024-11-20T17:55:28.715Z] =================================================================================================================== 00:34:46.927 [2024-11-20T17:55:28.715Z] Total : 7087.86 27.69 0.00 0.00 0.00 0.00 0.00 00:34:46.927 00:34:47.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:47.861 Nvme0n1 : 8.00 7134.12 27.87 0.00 0.00 0.00 0.00 0.00 00:34:47.861 [2024-11-20T17:55:29.649Z] =================================================================================================================== 00:34:47.861 [2024-11-20T17:55:29.649Z] Total : 7134.12 27.87 0.00 0.00 0.00 0.00 0.00 00:34:47.861 00:34:48.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:48.796 Nvme0n1 : 9.00 7167.11 28.00 0.00 0.00 0.00 0.00 0.00 00:34:48.796 [2024-11-20T17:55:30.584Z] =================================================================================================================== 00:34:48.796 [2024-11-20T17:55:30.584Z] Total : 7167.11 28.00 0.00 0.00 0.00 0.00 0.00 00:34:48.796 00:34:50.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:50.172 Nvme0n1 : 10.00 7197.60 28.12 0.00 0.00 0.00 0.00 0.00 00:34:50.172 [2024-11-20T17:55:31.960Z] =================================================================================================================== 00:34:50.172 [2024-11-20T17:55:31.960Z] Total : 7197.60 28.12 0.00 0.00 0.00 0.00 0.00 00:34:50.172 00:34:50.172 00:34:50.172 Latency(us) 00:34:50.172 [2024-11-20T17:55:31.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:50.172 Nvme0n1 : 10.02 7198.56 28.12 0.00 0.00 17769.97 7983.48 195416.44 
00:34:50.172 [2024-11-20T17:55:31.960Z] =================================================================================================================== 00:34:50.172 [2024-11-20T17:55:31.960Z] Total : 7198.56 28.12 0.00 0.00 17769.97 7983.48 195416.44 00:34:50.172 { 00:34:50.172 "results": [ 00:34:50.172 { 00:34:50.172 "job": "Nvme0n1", 00:34:50.172 "core_mask": "0x2", 00:34:50.172 "workload": "randwrite", 00:34:50.172 "status": "finished", 00:34:50.172 "queue_depth": 128, 00:34:50.172 "io_size": 4096, 00:34:50.172 "runtime": 10.016449, 00:34:50.172 "iops": 7198.559090152608, 00:34:50.172 "mibps": 28.119371445908627, 00:34:50.172 "io_failed": 0, 00:34:50.172 "io_timeout": 0, 00:34:50.172 "avg_latency_us": 17769.97397617583, 00:34:50.172 "min_latency_us": 7983.476363636363, 00:34:50.172 "max_latency_us": 195416.43636363637 00:34:50.172 } 00:34:50.172 ], 00:34:50.172 "core_count": 1 00:34:50.172 } 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 123904 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 123904 ']' 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 123904 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123904 00:34:50.172 killing process with pid 123904 00:34:50.172 Received shutdown signal, test time was about 10.000000 seconds 00:34:50.172 00:34:50.172 Latency(us) 00:34:50.172 [2024-11-20T17:55:31.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.172 [2024-11-20T17:55:31.960Z] =================================================================================================================== 00:34:50.172 [2024-11-20T17:55:31.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123904' 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 123904 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 123904 00:34:50.172 17:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:50.431 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.997 17:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:50.997 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 123308 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 123308 00:34:51.255 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 123308 Killed "${NVMF_APP[@]}" "$@" 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=124107 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 124107 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 124107 ']' 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:51.255 17:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:51.255 [2024-11-20 17:55:32.992539] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
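[This is the point where lvs_grow_dirty diverges from the clean pass: the target holding the lvstore is SIGKILLed, so the blobstore superblock is never cleanly closed, and a fresh nvmf_tgt (restarted here in interrupt mode) must recover the metadata when the AIO file is re-attached — the bs_recover and "Recover: blob" notices just below. A sketch, with "$nvmfpid" standing in for the 123308 seen above:]

    kill -9 "$nvmfpid" && wait "$nvmfpid" || true   # leave the lvstore dirty on purpose
    ip netns exec nvmf_tgt_ns_spdk \
        "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    waitforlisten "$!"
    # Re-attaching the same file triggers blobstore recovery on examine
    "$rootdir"/scripts/rpc.py bdev_aio_create \
        "$rootdir"/test/nvmf/target/aio_bdev aio_bdev 4096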
00:34:51.255 [2024-11-20 17:55:32.994154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:51.255 [2024-11-20 17:55:32.994254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.513 [2024-11-20 17:55:33.138041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.513 [2024-11-20 17:55:33.220889] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.513 [2024-11-20 17:55:33.220955] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.513 [2024-11-20 17:55:33.220969] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.513 [2024-11-20 17:55:33.220982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.513 [2024-11-20 17:55:33.220991] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:51.513 [2024-11-20 17:55:33.221029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.771 [2024-11-20 17:55:33.314592] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:51.771 [2024-11-20 17:55:33.315023] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.771 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:52.029 [2024-11-20 17:55:33.695344] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:52.029 [2024-11-20 17:55:33.695891] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:52.029 [2024-11-20 17:55:33.696205] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # local bdev_name=49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:52.029 17:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:52.288 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49cd931b-e233-4a70-a61e-47baa73e7c75 -t 2000 00:34:52.547 [ 00:34:52.547 { 00:34:52.547 "aliases": [ 00:34:52.547 "lvs/lvol" 00:34:52.547 ], 00:34:52.547 "assigned_rate_limits": { 00:34:52.547 "r_mbytes_per_sec": 0, 00:34:52.547 "rw_ios_per_sec": 0, 00:34:52.547 "rw_mbytes_per_sec": 0, 00:34:52.547 "w_mbytes_per_sec": 0 00:34:52.547 }, 00:34:52.547 "block_size": 4096, 00:34:52.547 "claimed": false, 00:34:52.547 "driver_specific": { 00:34:52.547 "lvol": { 00:34:52.547 "base_bdev": "aio_bdev", 00:34:52.547 "clone": false, 00:34:52.547 "esnap_clone": false, 00:34:52.547 "lvol_store_uuid": "fe0fba60-9d62-4ef4-ba17-44dd330024fe", 00:34:52.547 "num_allocated_clusters": 38, 00:34:52.547 "snapshot": false, 00:34:52.547 "thin_provision": false 00:34:52.547 } 00:34:52.547 }, 00:34:52.547 "name": "49cd931b-e233-4a70-a61e-47baa73e7c75", 00:34:52.547 "num_blocks": 38912, 00:34:52.547 "product_name": "Logical Volume", 00:34:52.547 "supported_io_types": { 00:34:52.547 "abort": false, 00:34:52.547 "compare": false, 00:34:52.547 "compare_and_write": false, 00:34:52.547 "copy": false, 00:34:52.547 "flush": false, 00:34:52.547 "get_zone_info": false, 00:34:52.547 "nvme_admin": false, 00:34:52.547 "nvme_io": false, 00:34:52.547 "nvme_io_md": false, 00:34:52.547 "nvme_iov_md": false, 00:34:52.547 "read": true, 00:34:52.547 "reset": true, 00:34:52.547 "seek_data": true, 00:34:52.547 "seek_hole": true, 00:34:52.547 "unmap": true, 00:34:52.547 "write": true, 00:34:52.547 "write_zeroes": true, 00:34:52.547 "zcopy": false, 00:34:52.547 "zone_append": false, 00:34:52.547 "zone_management": false 00:34:52.547 }, 00:34:52.547 "uuid": "49cd931b-e233-4a70-a61e-47baa73e7c75", 00:34:52.547 "zoned": false 00:34:52.547 } 00:34:52.547 ] 00:34:52.547 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:52.547 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:52.547 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:53.113 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:53.113 17:55:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:53.113 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:53.369 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:53.369 17:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:53.628 [2024-11-20 17:55:35.193828] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:53.628 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:53.886 2024/11/20 17:55:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fe0fba60-9d62-4ef4-ba17-44dd330024fe], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:34:53.886 request: 00:34:53.886 { 00:34:53.886 "method": "bdev_lvol_get_lvstores", 00:34:53.886 "params": { 00:34:53.886 "uuid": "fe0fba60-9d62-4ef4-ba17-44dd330024fe" 00:34:53.886 } 00:34:53.886 } 
00:34:53.886 Got JSON-RPC error response 00:34:53.886 GoRPCClient: error on JSON-RPC call 00:34:53.886 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:34:53.886 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:53.886 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:53.886 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:53.886 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:54.145 aio_bdev 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:54.145 17:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:54.403 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49cd931b-e233-4a70-a61e-47baa73e7c75 -t 2000 00:34:54.661 [ 00:34:54.661 { 00:34:54.661 "aliases": [ 00:34:54.661 "lvs/lvol" 00:34:54.661 ], 00:34:54.661 "assigned_rate_limits": { 00:34:54.661 "r_mbytes_per_sec": 0, 00:34:54.661 "rw_ios_per_sec": 0, 00:34:54.661 "rw_mbytes_per_sec": 0, 00:34:54.661 "w_mbytes_per_sec": 0 00:34:54.661 }, 00:34:54.661 "block_size": 4096, 00:34:54.661 "claimed": false, 00:34:54.661 "driver_specific": { 00:34:54.661 "lvol": { 00:34:54.661 "base_bdev": "aio_bdev", 00:34:54.661 "clone": false, 00:34:54.661 "esnap_clone": false, 00:34:54.661 "lvol_store_uuid": "fe0fba60-9d62-4ef4-ba17-44dd330024fe", 00:34:54.661 "num_allocated_clusters": 38, 00:34:54.661 "snapshot": false, 00:34:54.661 "thin_provision": false 00:34:54.661 } 00:34:54.661 }, 00:34:54.661 "name": "49cd931b-e233-4a70-a61e-47baa73e7c75", 00:34:54.661 "num_blocks": 38912, 00:34:54.661 "product_name": "Logical Volume", 00:34:54.661 "supported_io_types": { 00:34:54.661 "abort": false, 00:34:54.661 "compare": false, 00:34:54.661 "compare_and_write": false, 00:34:54.661 "copy": false, 00:34:54.661 "flush": false, 00:34:54.661 "get_zone_info": false, 00:34:54.661 "nvme_admin": false, 00:34:54.661 "nvme_io": false, 00:34:54.661 "nvme_io_md": false, 00:34:54.661 "nvme_iov_md": false, 
00:34:54.661 "read": true, 00:34:54.661 "reset": true, 00:34:54.661 "seek_data": true, 00:34:54.661 "seek_hole": true, 00:34:54.661 "unmap": true, 00:34:54.661 "write": true, 00:34:54.661 "write_zeroes": true, 00:34:54.661 "zcopy": false, 00:34:54.661 "zone_append": false, 00:34:54.661 "zone_management": false 00:34:54.661 }, 00:34:54.661 "uuid": "49cd931b-e233-4a70-a61e-47baa73e7c75", 00:34:54.661 "zoned": false 00:34:54.661 } 00:34:54.661 ] 00:34:54.920 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:54.920 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:54.920 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:55.179 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:55.179 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:55.179 17:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:55.438 17:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:55.438 17:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 49cd931b-e233-4a70-a61e-47baa73e7c75 00:34:55.696 17:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe0fba60-9d62-4ef4-ba17-44dd330024fe 00:34:55.955 17:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:56.213 17:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:56.780 00:34:56.780 real 0m21.906s 00:34:56.780 user 0m29.320s 00:34:56.780 sys 0m8.975s 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:56.780 ************************************ 00:34:56.780 END TEST lvs_grow_dirty 00:34:56.780 ************************************ 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:56.780 nvmf_trace.0 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:56.780 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.039 rmmod nvme_tcp 00:34:57.039 rmmod nvme_fabrics 00:34:57.039 rmmod nvme_keyring 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 124107 ']' 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 124107 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 124107 ']' 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 124107 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124107 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:57.039 killing process with pid 124107 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 124107' 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 124107 00:34:57.039 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 124107 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:57.298 17:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:57.298 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:57.298 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:57.298 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.560 17:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:34:57.560 00:34:57.560 real 0m44.771s 00:34:57.560 user 0m49.727s 00:34:57.560 sys 0m12.528s 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:57.560 ************************************ 00:34:57.560 END TEST nvmf_lvs_grow 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:57.560 ************************************ 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:57.560 ************************************ 00:34:57.560 START TEST nvmf_bdev_io_wait 00:34:57.560 ************************************ 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:57.560 * Looking for test storage... 
00:34:57.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:34:57.560 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:57.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.820 --rc genhtml_branch_coverage=1 00:34:57.820 --rc genhtml_function_coverage=1 00:34:57.820 --rc genhtml_legend=1 00:34:57.820 --rc geninfo_all_blocks=1 00:34:57.820 --rc geninfo_unexecuted_blocks=1 00:34:57.820 00:34:57.820 ' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:57.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.820 --rc genhtml_branch_coverage=1 00:34:57.820 --rc genhtml_function_coverage=1 00:34:57.820 --rc genhtml_legend=1 00:34:57.820 --rc geninfo_all_blocks=1 00:34:57.820 --rc geninfo_unexecuted_blocks=1 00:34:57.820 00:34:57.820 ' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:57.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.820 --rc genhtml_branch_coverage=1 00:34:57.820 --rc genhtml_function_coverage=1 00:34:57.820 --rc genhtml_legend=1 00:34:57.820 --rc geninfo_all_blocks=1 00:34:57.820 --rc geninfo_unexecuted_blocks=1 00:34:57.820 00:34:57.820 ' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:57.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.820 --rc genhtml_branch_coverage=1 00:34:57.820 --rc genhtml_function_coverage=1 00:34:57.820 --rc genhtml_legend=1 00:34:57.820 --rc geninfo_all_blocks=1 00:34:57.820 --rc 
geninfo_unexecuted_blocks=1 00:34:57.820 00:34:57.820 ' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.820 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:57.821 Cannot find device "nvmf_init_br" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:57.821 Cannot find device "nvmf_init_br2" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:57.821 Cannot find device "nvmf_tgt_br" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:57.821 Cannot find device "nvmf_tgt_br2" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:57.821 Cannot find device "nvmf_init_br" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:57.821 Cannot find device "nvmf_init_br2" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:34:57.821 Cannot find device "nvmf_tgt_br" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:57.821 Cannot find device "nvmf_tgt_br2" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:57.821 Cannot find device "nvmf_br" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:57.821 Cannot find device "nvmf_init_if" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:57.821 Cannot find device "nvmf_init_if2" 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:57.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:57.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:34:57.821 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:57.822 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:57.822 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:57.822 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:57.822 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:57.822 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:58.081 17:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:58.081 
17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:58.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:58.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:34:58.081 00:34:58.081 --- 10.0.0.3 ping statistics --- 00:34:58.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.081 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:58.081 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:58.081 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:34:58.081 00:34:58.081 --- 10.0.0.4 ping statistics --- 00:34:58.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.081 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:58.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:58.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:34:58.081 00:34:58.081 --- 10.0.0.1 ping statistics --- 00:34:58.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.081 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:58.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:58.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:34:58.081 00:34:58.081 --- 10.0.0.2 ping statistics --- 00:34:58.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.081 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.081 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=124569 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 124569 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 124569 ']' 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
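The nvmfappstart call above boils down to launching the target inside the test namespace (a sketch of the command assembled from NVMF_TARGET_NS_CMD and NVMF_APP in the trace; -m 0xF gives four reactors on cores 0-3, -e 0xFFFF enables all tracepoint groups, and --wait-for-rpc defers subsystem init until framework_start_init is sent):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc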
00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:58.082 17:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:58.341 [2024-11-20 17:55:39.900111] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:58.341 [2024-11-20 17:55:39.901439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:58.341 [2024-11-20 17:55:39.901527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.341 [2024-11-20 17:55:40.047814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:58.600 [2024-11-20 17:55:40.177613] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.600 [2024-11-20 17:55:40.177694] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.600 [2024-11-20 17:55:40.177716] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.600 [2024-11-20 17:55:40.177727] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.600 [2024-11-20 17:55:40.177736] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.600 [2024-11-20 17:55:40.177914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.600 [2024-11-20 17:55:40.178597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.600 [2024-11-20 17:55:40.178711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:58.600 [2024-11-20 17:55:40.178717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.600 [2024-11-20 17:55:40.179336] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
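With the reactors up, the target is reachable at 10.0.0.3 through the veth/bridge plumbing built by nvmf_veth_init earlier; condensed, that topology is (a sketch using the interface names and addresses from the trace; the second pair, nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, is set up the same way, and each link is brought up and allowed through iptables as shown above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, default netns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br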
00:34:59.169 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.169 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:34:59.169 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:59.169 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.170 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.429 17:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.429 [2024-11-20 17:55:41.087684] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:59.429 [2024-11-20 17:55:41.087938] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.430 [2024-11-20 17:55:41.089183] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:59.430 [2024-11-20 17:55:41.089799] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
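Because the target was started with --wait-for-rpc, the test can shrink the bdev_io pool before the framework initializes; the two rpc_cmd calls above are equivalent to (a sketch, with rpc_cmd standing in for scripts/rpc.py against the default /var/tmp/spdk.sock; the tiny pool and cache sizes appear intended to exhaust bdev_io allocations so bdevperf exercises the io_wait path this test is named for):

    # must be sent before framework_start_init
    scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev-io-pool-size 5, bdev-io-cache-size 1
    scripts/rpc.py framework_start_init          # finish the deferred startup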
00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.430 [2024-11-20 17:55:41.099623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.430 Malloc0 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.430 [2024-11-20 17:55:41.191841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=124622 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:59.430 17:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=124624 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:59.430 { 00:34:59.430 "params": { 00:34:59.430 "name": "Nvme$subsystem", 00:34:59.430 "trtype": "$TEST_TRANSPORT", 00:34:59.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.430 "adrfam": "ipv4", 00:34:59.430 "trsvcid": "$NVMF_PORT", 00:34:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.430 "hdgst": ${hdgst:-false}, 00:34:59.430 "ddgst": ${ddgst:-false} 00:34:59.430 }, 00:34:59.430 "method": "bdev_nvme_attach_controller" 00:34:59.430 } 00:34:59.430 EOF 00:34:59.430 )") 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=124626 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=124629 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:59.430 { 00:34:59.430 "params": { 00:34:59.430 "name": "Nvme$subsystem", 00:34:59.430 "trtype": "$TEST_TRANSPORT", 00:34:59.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.430 "adrfam": "ipv4", 00:34:59.430 "trsvcid": "$NVMF_PORT", 00:34:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.430 "hdgst": ${hdgst:-false}, 00:34:59.430 "ddgst": ${ddgst:-false} 00:34:59.430 }, 00:34:59.430 "method": "bdev_nvme_attach_controller" 00:34:59.430 } 
00:34:59.430 EOF 00:34:59.430 )") 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:59.430 { 00:34:59.430 "params": { 00:34:59.430 "name": "Nvme$subsystem", 00:34:59.430 "trtype": "$TEST_TRANSPORT", 00:34:59.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.430 "adrfam": "ipv4", 00:34:59.430 "trsvcid": "$NVMF_PORT", 00:34:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.430 "hdgst": ${hdgst:-false}, 00:34:59.430 "ddgst": ${ddgst:-false} 00:34:59.430 }, 00:34:59.430 "method": "bdev_nvme_attach_controller" 00:34:59.430 } 00:34:59.430 EOF 00:34:59.430 )") 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:59.430 { 00:34:59.430 "params": { 00:34:59.430 "name": "Nvme$subsystem", 00:34:59.430 "trtype": "$TEST_TRANSPORT", 00:34:59.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.430 "adrfam": "ipv4", 00:34:59.430 "trsvcid": "$NVMF_PORT", 00:34:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.430 "hdgst": ${hdgst:-false}, 00:34:59.430 "ddgst": ${ddgst:-false} 00:34:59.430 }, 00:34:59.430 "method": "bdev_nvme_attach_controller" 00:34:59.430 } 00:34:59.430 EOF 00:34:59.430 )") 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:59.430 "params": { 00:34:59.430 "name": "Nvme1", 00:34:59.430 "trtype": "tcp", 00:34:59.430 "traddr": "10.0.0.3", 00:34:59.430 "adrfam": "ipv4", 00:34:59.430 "trsvcid": "4420", 00:34:59.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.430 "hdgst": false, 00:34:59.430 "ddgst": false 00:34:59.430 }, 00:34:59.430 "method": 
"bdev_nvme_attach_controller" 00:34:59.430 }' 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:59.430 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:59.430 "params": { 00:34:59.431 "name": "Nvme1", 00:34:59.431 "trtype": "tcp", 00:34:59.431 "traddr": "10.0.0.3", 00:34:59.431 "adrfam": "ipv4", 00:34:59.431 "trsvcid": "4420", 00:34:59.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.431 "hdgst": false, 00:34:59.431 "ddgst": false 00:34:59.431 }, 00:34:59.431 "method": "bdev_nvme_attach_controller" 00:34:59.431 }' 00:34:59.431 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:59.431 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:59.690 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:59.690 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:59.690 "params": { 00:34:59.690 "name": "Nvme1", 00:34:59.690 "trtype": "tcp", 00:34:59.690 "traddr": "10.0.0.3", 00:34:59.690 "adrfam": "ipv4", 00:34:59.690 "trsvcid": "4420", 00:34:59.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.690 "hdgst": false, 00:34:59.690 "ddgst": false 00:34:59.690 }, 00:34:59.690 "method": "bdev_nvme_attach_controller" 00:34:59.690 }' 00:34:59.690 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:59.690 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:59.690 "params": { 00:34:59.690 "name": "Nvme1", 00:34:59.690 "trtype": "tcp", 00:34:59.690 "traddr": "10.0.0.3", 00:34:59.690 "adrfam": "ipv4", 00:34:59.690 "trsvcid": "4420", 00:34:59.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.690 "hdgst": false, 00:34:59.690 "ddgst": false 00:34:59.690 }, 00:34:59.690 "method": "bdev_nvme_attach_controller" 00:34:59.690 }' 00:34:59.690 [2024-11-20 17:55:41.252878] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.690 [2024-11-20 17:55:41.252960] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:59.690 17:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 124622 00:34:59.690 [2024-11-20 17:55:41.266007] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:59.690 [2024-11-20 17:55:41.266120] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:59.690 [2024-11-20 17:55:41.268816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.690 [2024-11-20 17:55:41.268897] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:59.690 [2024-11-20 17:55:41.293246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.690 [2024-11-20 17:55:41.293657] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:59.948 [2024-11-20 17:55:41.481339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.948 [2024-11-20 17:55:41.557111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.948 [2024-11-20 17:55:41.582434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:34:59.948 [2024-11-20 17:55:41.650387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:34:59.948 [2024-11-20 17:55:41.672990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.948 [2024-11-20 17:55:41.733363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.207 [2024-11-20 17:55:41.769277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:35:00.207 [2024-11-20 17:55:41.832538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:35:00.207 Running I/O for 1 seconds... 00:35:00.207 Running I/O for 1 seconds... 00:35:00.465 Running I/O for 1 seconds... 00:35:00.465 Running I/O for 1 seconds... 
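[Editor's note] For orientation: the four DPDK initialization records above belong to four concurrent bdevperf processes, one per workload, each pinned to its own core (-m 0x10/0x20/0x40/0x80, landing on cores 4 through 7 per the reactor notices), with its own shared-memory id (-i 1..4) and --file-prefix (spdk1..4) so their hugepage state cannot collide. The driving script, target/bdev_io_wait.sh, does roughly the following; this is an illustrative condensation, not the verbatim script:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
pids=() i=1
for entry in write:0x10 read:0x20 flush:0x40 unmap:0x80; do
    wl=${entry%%:*} mask=${entry##*:}
    # --json <(...) hands the generated config to bdevperf as /dev/fd/NN,
    # which is the "--json /dev/fd/63" visible in the records above.
    "$BDEVPERF" -m "$mask" -i "$i" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
    pids+=($!)
    i=$((i + 1))
done
wait "${pids[@]}"   # the script waits each PID in turn (wait 124622, 124624, ...)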
00:35:01.399 6326.00 IOPS, 24.71 MiB/s 00:35:01.399 Latency(us) 00:35:01.399 [2024-11-20T17:55:43.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.399 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:01.399 Nvme1n1 : 1.01 6377.02 24.91 0.00 0.00 19943.63 5272.67 25499.46 00:35:01.399 [2024-11-20T17:55:43.187Z] =================================================================================================================== 00:35:01.399 [2024-11-20T17:55:43.187Z] Total : 6377.02 24.91 0.00 0.00 19943.63 5272.67 25499.46 00:35:01.399 3501.00 IOPS, 13.68 MiB/s 00:35:01.399 Latency(us) 00:35:01.399 [2024-11-20T17:55:43.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.399 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:01.399 Nvme1n1 : 1.03 3526.52 13.78 0.00 0.00 35670.30 6940.86 59816.49 00:35:01.399 [2024-11-20T17:55:43.187Z] =================================================================================================================== 00:35:01.399 [2024-11-20T17:55:43.187Z] Total : 3526.52 13.78 0.00 0.00 35670.30 6940.86 59816.49 00:35:01.399 3834.00 IOPS, 14.98 MiB/s 00:35:01.399 Latency(us) 00:35:01.399 [2024-11-20T17:55:43.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.400 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:01.400 Nvme1n1 : 1.01 3957.43 15.46 0.00 0.00 32211.73 6434.44 68634.07 00:35:01.400 [2024-11-20T17:55:43.188Z] =================================================================================================================== 00:35:01.400 [2024-11-20T17:55:43.188Z] Total : 3957.43 15.46 0.00 0.00 32211.73 6434.44 68634.07 00:35:01.400 197864.00 IOPS, 772.91 MiB/s 00:35:01.400 Latency(us) 00:35:01.400 [2024-11-20T17:55:43.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.400 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:01.400 Nvme1n1 : 1.00 197472.12 771.38 0.00 0.00 644.63 329.54 1951.19 00:35:01.400 [2024-11-20T17:55:43.188Z] =================================================================================================================== 00:35:01.400 [2024-11-20T17:55:43.188Z] Total : 197472.12 771.38 0.00 0.00 644.63 329.54 1951.19 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 124624 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 124626 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 124629 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:01.658 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:01.916 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.916 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:01.916 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.916 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.916 rmmod nvme_tcp 00:35:01.916 rmmod nvme_fabrics 00:35:01.916 rmmod nvme_keyring 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 124569 ']' 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 124569 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 124569 ']' 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 124569 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124569 00:35:01.917 killing process with pid 124569 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124569' 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 124569 00:35:01.917 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 124569 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:35:02.176 
17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.176 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.435 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:35:02.435 00:35:02.435 real 0m4.794s 00:35:02.435 user 0m15.405s 00:35:02.435 sys 0m2.773s 00:35:02.435 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:02.435 17:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:02.435 ************************************ 00:35:02.435 END TEST nvmf_bdev_io_wait 00:35:02.435 
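[Editor's note] One closing observation on the nvmf_bdev_io_wait results above: bdevperf reports throughput as both IOPS and MiB/s, and with the 4096-byte I/O size used here the two are related by a fixed factor of 4096 / 2^20 = 1/256, which checks out against the read row (6377.02 IOPS -> 24.91 MiB/s). The flush workload's ~197k IOPS sits far above the data-moving workloads, plausibly because a flush on this target carries no payload to push across the TCP connection. A quick verification of the conversion:

# MiB/s = IOPS * io_size / 2^20; with -o 4096 that is simply IOPS / 256.
awk 'BEGIN { printf "%.2f MiB/s\n", 6377.02 * 4096 / 1048576 }'   # prints 24.91 MiB/s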
************************************ 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:02.435 ************************************ 00:35:02.435 START TEST nvmf_queue_depth 00:35:02.435 ************************************ 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:02.435 * Looking for test storage... 00:35:02.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:02.435 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.696 --rc genhtml_branch_coverage=1 00:35:02.696 --rc genhtml_function_coverage=1 00:35:02.696 --rc genhtml_legend=1 00:35:02.696 --rc geninfo_all_blocks=1 00:35:02.696 --rc geninfo_unexecuted_blocks=1 00:35:02.696 00:35:02.696 ' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.696 --rc genhtml_branch_coverage=1 00:35:02.696 --rc genhtml_function_coverage=1 00:35:02.696 --rc genhtml_legend=1 00:35:02.696 --rc geninfo_all_blocks=1 00:35:02.696 --rc geninfo_unexecuted_blocks=1 00:35:02.696 00:35:02.696 ' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.696 --rc genhtml_branch_coverage=1 00:35:02.696 --rc genhtml_function_coverage=1 00:35:02.696 --rc genhtml_legend=1 00:35:02.696 --rc geninfo_all_blocks=1 00:35:02.696 --rc geninfo_unexecuted_blocks=1 00:35:02.696 00:35:02.696 ' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.696 --rc genhtml_branch_coverage=1 00:35:02.696 --rc genhtml_function_coverage=1 00:35:02.696 --rc genhtml_legend=1 00:35:02.696 --rc geninfo_all_blocks=1 00:35:02.696 --rc 
geninfo_unexecuted_blocks=1 00:35:02.696 00:35:02.696 ' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:02.696 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:02.697 Cannot find device "nvmf_init_br" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:02.697 Cannot find device "nvmf_init_br2" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:02.697 Cannot find device "nvmf_tgt_br" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:02.697 Cannot find device "nvmf_tgt_br2" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:02.697 Cannot find device "nvmf_init_br" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:02.697 Cannot find device "nvmf_init_br2" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:35:02.697 
17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:02.697 Cannot find device "nvmf_tgt_br" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:02.697 Cannot find device "nvmf_tgt_br2" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:02.697 Cannot find device "nvmf_br" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:02.697 Cannot find device "nvmf_init_if" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:02.697 Cannot find device "nvmf_init_if2" 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:02.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:02.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:02.697 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:02.955 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:02.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:02.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:35:02.956 00:35:02.956 --- 10.0.0.3 ping statistics --- 00:35:02.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.956 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:02.956 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:02.956 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:35:02.956 00:35:02.956 --- 10.0.0.4 ping statistics --- 00:35:02.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.956 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:02.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:02.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:35:02.956 00:35:02.956 --- 10.0.0.1 ping statistics --- 00:35:02.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.956 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:02.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:02.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:35:02.956 00:35:02.956 --- 10.0.0.2 ping statistics --- 00:35:02.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.956 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=124917 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 124917 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 124917 ']' 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:02.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
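[Editor's note] The four-way ping check above is nvmf_veth_init verifying the topology it just built: two initiator-side interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and the peer ends of all four veth pairs are enslaved to the nvmf_br bridge. Condensed to a single pair per side (the log shows the full two-pair version plus the iptables ACCEPT rules):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the peer ends together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                          # initiator -> target, as above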
00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:02.956 17:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:03.215 [2024-11-20 17:55:44.810895] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:03.215 [2024-11-20 17:55:44.812280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:03.215 [2024-11-20 17:55:44.812362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.215 [2024-11-20 17:55:44.962555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.474 [2024-11-20 17:55:45.087547] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.474 [2024-11-20 17:55:45.087618] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.474 [2024-11-20 17:55:45.087632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.474 [2024-11-20 17:55:45.087642] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.474 [2024-11-20 17:55:45.087652] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.474 [2024-11-20 17:55:45.087691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.474 [2024-11-20 17:55:45.214000] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:03.474 [2024-11-20 17:55:45.214444] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
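[Editor's note] The two thread.c records closing the block above are the point of this whole suite variant: the target was started with --interrupt-mode, so its reactors wait on file descriptors rather than busy-polling, and each spdk_thread is switched to interrupt mode as it is created. The launch itself, restated from the nvmfappstart record earlier:

# Target runs inside the test namespace, serving RPCs on /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest_common.sh helper; returns once the RPC socket answers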
00:35:04.041 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.041 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:35:04.041 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:04.041 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.041 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 [2024-11-20 17:55:45.876737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 Malloc0 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.300 [2024-11-20 17:55:45.952673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.300 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=124963 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 124963 /var/tmp/bdevperf.sock 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 124963 ']' 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:04.301 17:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.301 [2024-11-20 17:55:46.022843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
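Stripped of the xtrace noise, the queue-depth setup traced above reduces to a handful of RPCs plus the bdevperf attach-and-run that follows below; a condensed sketch using the exact arguments from the log (paths relative to the SPDK repo root, which is an assumption of this sketch):

# Target side: TCP transport with 8192-byte in-capsule data, a 64 MiB
# malloc bdev with 512-byte blocks, and one subsystem listening on 10.0.0.3.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf waits (-z) on its own RPC socket, the controller is
# attached over that socket, then 10 s of 4 KiB verify I/O runs at queue depth 1024.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests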
00:35:04.301 [2024-11-20 17:55:46.022947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124963 ] 00:35:04.559 [2024-11-20 17:55:46.164843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.559 [2024-11-20 17:55:46.277467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:05.495 NVMe0n1 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.495 17:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:05.753 Running I/O for 10 seconds... 00:35:07.622 7168.00 IOPS, 28.00 MiB/s [2024-11-20T17:55:50.345Z] 7854.50 IOPS, 30.68 MiB/s [2024-11-20T17:55:51.719Z] 8005.00 IOPS, 31.27 MiB/s [2024-11-20T17:55:52.654Z] 8196.50 IOPS, 32.02 MiB/s [2024-11-20T17:55:53.628Z] 8310.80 IOPS, 32.46 MiB/s [2024-11-20T17:55:54.561Z] 8373.67 IOPS, 32.71 MiB/s [2024-11-20T17:55:55.494Z] 8317.29 IOPS, 32.49 MiB/s [2024-11-20T17:55:56.428Z] 8323.50 IOPS, 32.51 MiB/s [2024-11-20T17:55:57.363Z] 8371.89 IOPS, 32.70 MiB/s [2024-11-20T17:55:57.621Z] 8400.90 IOPS, 32.82 MiB/s 00:35:15.833 Latency(us) 00:35:15.833 [2024-11-20T17:55:57.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.833 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:15.833 Verification LBA range: start 0x0 length 0x4000 00:35:15.833 NVMe0n1 : 10.08 8429.58 32.93 0.00 0.00 120932.68 28955.00 122016.12 00:35:15.833 [2024-11-20T17:55:57.621Z] =================================================================================================================== 00:35:15.833 [2024-11-20T17:55:57.621Z] Total : 8429.58 32.93 0.00 0.00 120932.68 28955.00 122016.12 00:35:15.833 { 00:35:15.833 "results": [ 00:35:15.833 { 00:35:15.833 "job": "NVMe0n1", 00:35:15.833 "core_mask": "0x1", 00:35:15.833 "workload": "verify", 00:35:15.833 "status": "finished", 00:35:15.833 "verify_range": { 00:35:15.833 "start": 0, 00:35:15.833 "length": 16384 00:35:15.833 }, 00:35:15.833 "queue_depth": 1024, 00:35:15.833 "io_size": 4096, 00:35:15.833 "runtime": 10.083296, 00:35:15.833 "iops": 8429.5849293723, 00:35:15.833 "mibps": 32.92806613036055, 00:35:15.833 "io_failed": 0, 00:35:15.833 "io_timeout": 0, 00:35:15.833 "avg_latency_us": 120932.67705481841, 00:35:15.833 "min_latency_us": 28954.996363636365, 00:35:15.833 "max_latency_us": 122016.11636363636 00:35:15.833 } 00:35:15.833 ], 
00:35:15.833 "core_count": 1 00:35:15.833 } 00:35:15.833 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 124963 00:35:15.833 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 124963 ']' 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 124963 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124963 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:15.834 killing process with pid 124963 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124963' 00:35:15.834 Received shutdown signal, test time was about 10.000000 seconds 00:35:15.834 00:35:15.834 Latency(us) 00:35:15.834 [2024-11-20T17:55:57.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.834 [2024-11-20T17:55:57.622Z] =================================================================================================================== 00:35:15.834 [2024-11-20T17:55:57.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 124963 00:35:15.834 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 124963 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.092 rmmod nvme_tcp 00:35:16.092 rmmod nvme_fabrics 00:35:16.092 rmmod nvme_keyring 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:16.092 17:55:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 124917 ']' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 124917 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 124917 ']' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 124917 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124917 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:16.092 killing process with pid 124917 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124917' 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 124917 00:35:16.092 17:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 124917 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:16.657 17:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:35:16.657 00:35:16.657 real 0m14.349s 00:35:16.657 user 0m22.786s 00:35:16.657 sys 0m2.959s 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:16.657 ************************************ 00:35:16.657 END TEST nvmf_queue_depth 00:35:16.657 ************************************ 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:16.657 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:16.916 ************************************ 00:35:16.916 START TEST nvmf_target_multipath 00:35:16.916 ************************************ 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:16.916 * Looking for test storage... 
00:35:16.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.916 --rc genhtml_branch_coverage=1 00:35:16.916 --rc genhtml_function_coverage=1 00:35:16.916 --rc genhtml_legend=1 00:35:16.916 --rc geninfo_all_blocks=1 00:35:16.916 --rc geninfo_unexecuted_blocks=1 00:35:16.916 00:35:16.916 ' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.916 --rc genhtml_branch_coverage=1 00:35:16.916 --rc genhtml_function_coverage=1 00:35:16.916 --rc genhtml_legend=1 00:35:16.916 --rc geninfo_all_blocks=1 00:35:16.916 --rc geninfo_unexecuted_blocks=1 00:35:16.916 00:35:16.916 ' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.916 --rc genhtml_branch_coverage=1 00:35:16.916 --rc genhtml_function_coverage=1 00:35:16.916 --rc genhtml_legend=1 00:35:16.916 --rc geninfo_all_blocks=1 00:35:16.916 --rc geninfo_unexecuted_blocks=1 00:35:16.916 00:35:16.916 ' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.916 --rc genhtml_branch_coverage=1 00:35:16.916 --rc genhtml_function_coverage=1 00:35:16.916 --rc 
genhtml_legend=1 00:35:16.916 --rc geninfo_all_blocks=1 00:35:16.916 --rc geninfo_unexecuted_blocks=1 00:35:16.916 00:35:16.916 ' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.916 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.917 17:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.917 17:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:16.917 Cannot find device "nvmf_init_br" 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:16.917 Cannot find device "nvmf_init_br2" 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:35:16.917 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:17.175 Cannot find device "nvmf_tgt_br" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:17.175 Cannot find device "nvmf_tgt_br2" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:35:17.175 Cannot find device "nvmf_init_br" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:17.175 Cannot find device "nvmf_init_br2" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:17.175 Cannot find device "nvmf_tgt_br" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:17.175 Cannot find device "nvmf_tgt_br2" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:17.175 Cannot find device "nvmf_br" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:17.175 Cannot find device "nvmf_init_if" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:17.175 Cannot find device "nvmf_init_if2" 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:17.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:17.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:35:17.175 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:17.176 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:17.435 17:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:17.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:17.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:35:17.435 00:35:17.435 --- 10.0.0.3 ping statistics --- 00:35:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.435 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:17.435 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:17.435 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:35:17.435 00:35:17.435 --- 10.0.0.4 ping statistics --- 00:35:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.435 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:17.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:17.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:35:17.435 00:35:17.435 --- 10.0.0.1 ping statistics --- 00:35:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.435 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:17.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:17.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:35:17.435 00:35:17.435 --- 10.0.0.2 ping statistics --- 00:35:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.435 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:35:17.435 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=125345 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 125345 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 125345 ']' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:17.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:17.436 17:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:17.436 [2024-11-20 17:55:59.199538] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:17.436 [2024-11-20 17:55:59.200893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:17.436 [2024-11-20 17:55:59.200985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.694 [2024-11-20 17:55:59.347259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:17.694 [2024-11-20 17:55:59.454209] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.694 [2024-11-20 17:55:59.454277] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.694 [2024-11-20 17:55:59.454291] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.694 [2024-11-20 17:55:59.454301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.694 [2024-11-20 17:55:59.454310] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.694 [2024-11-20 17:55:59.454481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.694 [2024-11-20 17:55:59.454642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:17.694 [2024-11-20 17:55:59.455365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:17.694 [2024-11-20 17:55:59.455438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.951 [2024-11-20 17:55:59.559203] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:17.951 [2024-11-20 17:55:59.559654] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:17.951 [2024-11-20 17:55:59.559732] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:17.951 [2024-11-20 17:55:59.559818] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:17.951 [2024-11-20 17:55:59.560111] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
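For orientation, the nvmf_veth_init fixture whose bring-up is traced above condenses to the following; a sketch of the first initiator/target pair only, using the interface names and addresses from the log (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is built identically; root privileges are assumed):

# Target interfaces live in their own namespace; each veth peer is a bridge port.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# The SPDK_NVMF comment tags the rules so nvmftestfini can strip them later
# with 'iptables-save | grep -v SPDK_NVMF | iptables-restore'.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity-check both directions, as the pings above do:
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1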
00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.518 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:18.776 [2024-11-20 17:56:00.536477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.035 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:19.293 Malloc0 00:35:19.293 17:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:35:19.552 17:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.810 17:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:20.068 [2024-11-20 17:56:01.784900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:20.068 17:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:35:20.326 [2024-11-20 17:56:02.096421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:35:20.584 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:35:20.585 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:35:20.843 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:35:20.843 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:35:20.843 17:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:20.843 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:35:20.843 17:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=125482 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:35:22.746 17:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:35:22.746 [global] 00:35:22.746 thread=1 00:35:22.746 invalidate=1 00:35:22.746 rw=randrw 00:35:22.746 time_based=1 00:35:22.746 runtime=6 00:35:22.746 ioengine=libaio 00:35:22.746 direct=1 00:35:22.746 bs=4096 00:35:22.746 iodepth=128 00:35:22.746 norandommap=0 00:35:22.746 numjobs=1 00:35:22.746 00:35:22.746 verify_dump=1 00:35:22.746 verify_backlog=512 00:35:22.746 verify_state_save=0 00:35:22.746 do_verify=1 00:35:22.746 verify=crc32c-intel 00:35:22.746 [job0] 00:35:22.746 filename=/dev/nvme0n1 00:35:22.746 Could not set queue depth (nvme0n1) 00:35:23.005 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:23.005 fio-3.35 00:35:23.005 Starting 1 thread 00:35:23.942 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:35:23.942 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:24.200 17:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:35:25.575 17:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:35:25.575 17:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:35:25.575 17:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:25.575 17:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:35:25.575 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:25.833 17:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:35:27.208 17:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:35:27.208 17:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:35:27.208 17:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:27.208 17:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 125482 00:35:29.129 00:35:29.129 job0: (groupid=0, jobs=1): err= 0: pid=125503: Wed Nov 20 17:56:10 2024 00:35:29.129 read: IOPS=10.6k, BW=41.5MiB/s (43.6MB/s)(250MiB/6007msec) 00:35:29.129 slat (usec): min=4, max=5515, avg=53.88, stdev=230.40 00:35:29.129 clat (usec): min=1200, max=15046, avg=8089.20, stdev=1261.69 00:35:29.129 lat (usec): min=1237, max=15407, avg=8143.08, stdev=1273.06 00:35:29.129 clat percentiles (usec): 00:35:29.129 | 1.00th=[ 4686], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7242], 00:35:29.129 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:35:29.129 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10290], 00:35:29.129 | 99.00th=[12256], 99.50th=[12780], 99.90th=[13566], 99.95th=[13829], 00:35:29.129 | 99.99th=[14484] 00:35:29.129 bw ( KiB/s): min= 4088, max=28144, per=51.34%, avg=21843.64, stdev=8132.90, samples=11 00:35:29.129 iops : min= 1022, max= 7036, avg=5460.91, stdev=2033.22, samples=11 00:35:29.129 write: IOPS=6351, BW=24.8MiB/s (26.0MB/s)(129MiB/5188msec); 0 zone resets 00:35:29.129 slat (usec): min=8, max=4114, avg=65.42, stdev=137.89 00:35:29.129 clat (usec): min=739, max=14183, avg=7432.63, stdev=1091.10 00:35:29.129 lat (usec): min=915, max=14208, avg=7498.05, stdev=1095.04 00:35:29.129 clat percentiles (usec): 00:35:29.130 | 1.00th=[ 3884], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 6915], 00:35:29.130 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:35:29.130 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:35:29.130 | 99.00th=[11076], 99.50th=[11994], 99.90th=[13173], 99.95th=[13304], 00:35:29.130 | 99.99th=[13829] 00:35:29.130 bw ( KiB/s): min= 4400, max=28208, per=86.38%, avg=21944.00, stdev=8012.18, samples=11 00:35:29.130 iops : min= 1100, max= 7052, avg=5486.00, stdev=2003.04, samples=11 00:35:29.130 lat (usec) : 750=0.01%, 1000=0.01% 00:35:29.130 lat (msec) : 2=0.04%, 4=0.51%, 10=94.89%, 20=4.56% 00:35:29.130 cpu : usr=5.71%, sys=23.48%, ctx=8272, majf=0, minf=145 00:35:29.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:35:29.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:29.130 issued rwts: total=63895,32949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:29.130 00:35:29.130 Run status group 0 (all jobs): 00:35:29.130 READ: bw=41.5MiB/s (43.6MB/s), 41.5MiB/s-41.5MiB/s (43.6MB/s-43.6MB/s), io=250MiB (262MB), run=6007-6007msec 00:35:29.130 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=129MiB (135MB), run=5188-5188msec 00:35:29.130 00:35:29.130 Disk stats (read/write): 00:35:29.130 nvme0n1: ios=62982/32342, merge=0/0, ticks=476049/232120, in_queue=708169, util=98.67% 00:35:29.130 17:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:35:29.388 17:56:11 
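The check_ana_state helper whose expansion fills the trace is a bounded poll of the per-path ANA state that the kernel exposes under sysfs. The following is a reconstruction from the traced lines (names and the 20-second budget follow the trace; treat it as a sketch rather than a verbatim copy of target/multipath.sh):

    # Wait until /sys/block/<path>/ana_state reports the expected ANA state
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Retry while the file is missing or still shows another state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }

    check_ana_state nvme0c0n1 optimized
    check_ana_state nvme0c1n1 inaccessible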
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:35:29.647 17:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=125634 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:35:30.583 17:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:35:30.841 [global] 00:35:30.841 thread=1 00:35:30.841 invalidate=1 00:35:30.841 rw=randrw 00:35:30.841 time_based=1 00:35:30.841 runtime=6 00:35:30.841 ioengine=libaio 00:35:30.841 direct=1 00:35:30.841 bs=4096 00:35:30.841 iodepth=128 00:35:30.841 norandommap=0 00:35:30.841 numjobs=1 00:35:30.841 00:35:30.841 verify_dump=1 00:35:30.841 verify_backlog=512 00:35:30.841 verify_state_save=0 00:35:30.841 do_verify=1 00:35:30.841 verify=crc32c-intel 00:35:30.841 [job0] 00:35:30.841 filename=/dev/nvme0n1 00:35:30.841 Could not set queue depth (nvme0n1) 00:35:30.841 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:30.841 fio-3.35 00:35:30.841 Starting 1 thread 00:35:31.775 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:35:32.033 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:32.291 17:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:35:33.226 17:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:35:33.227 17:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:33.227 17:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:33.227 17:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:35:33.795 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:34.053 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:34.054 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:34.054 17:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:35:35.017 17:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:35:35.017 17:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:35.017 17:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:35:35.017 17:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 125634 00:35:36.923 00:35:36.923 job0: (groupid=0, jobs=1): err= 0: pid=125655: Wed Nov 20 17:56:18 2024 00:35:36.923 read: IOPS=10.7k, BW=41.9MiB/s (44.0MB/s)(252MiB/6006msec) 00:35:36.923 slat (usec): min=6, max=5555, avg=45.25, stdev=205.12 00:35:36.923 clat (usec): min=447, max=50048, avg=8027.46, stdev=1815.43 00:35:36.923 lat (usec): min=463, max=50059, avg=8072.70, stdev=1823.44 00:35:36.923 clat percentiles (usec): 00:35:36.923 | 1.00th=[ 3228], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 7111], 00:35:36.923 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225], 00:35:36.923 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10945], 00:35:36.923 | 99.00th=[13173], 99.50th=[14615], 99.90th=[17957], 99.95th=[20317], 00:35:36.923 | 99.99th=[44303] 00:35:36.923 bw ( KiB/s): min=12760, max=28247, per=52.80%, avg=22675.64, stdev=5342.08, samples=11 00:35:36.923 iops : min= 3190, max= 7061, avg=5668.82, stdev=1335.43, samples=11 00:35:36.923 write: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(135MiB/5358msec); 0 zone resets 00:35:36.923 slat (usec): min=14, max=2625, avg=56.80, stdev=122.71 00:35:36.923 clat (usec): min=436, max=20083, avg=7094.57, stdev=1624.14 00:35:36.923 lat (usec): min=470, max=20114, avg=7151.37, stdev=1629.23 00:35:36.923 clat percentiles (usec): 00:35:36.923 | 1.00th=[ 2311], 5.00th=[ 4228], 10.00th=[ 5014], 20.00th=[ 6194], 00:35:36.923 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7504], 00:35:36.923 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9110], 00:35:36.923 | 99.00th=[12256], 99.50th=[13698], 99.90th=[15795], 99.95th=[17695], 00:35:36.923 | 99.99th=[19792] 00:35:36.923 bw ( KiB/s): min=12992, 
max=27552, per=88.13%, avg=22707.09, stdev=5046.34, samples=11 00:35:36.923 iops : min= 3248, max= 6888, avg=5676.73, stdev=1261.55, samples=11 00:35:36.923 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:35:36.923 lat (msec) : 2=0.45%, 4=2.06%, 10=90.62%, 20=6.77%, 50=0.03% 00:35:36.923 lat (msec) : 100=0.01% 00:35:36.923 cpu : usr=5.89%, sys=23.73%, ctx=8003, majf=0, minf=102 00:35:36.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:35:36.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:36.923 issued rwts: total=64481,34511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:36.923 00:35:36.923 Run status group 0 (all jobs): 00:35:36.923 READ: bw=41.9MiB/s (44.0MB/s), 41.9MiB/s-41.9MiB/s (44.0MB/s-44.0MB/s), io=252MiB (264MB), run=6006-6006msec 00:35:36.923 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=135MiB (141MB), run=5358-5358msec 00:35:36.923 00:35:36.923 Disk stats (read/write): 00:35:36.923 nvme0n1: ios=63549/33878, merge=0/0, ticks=478024/230578, in_queue=708602, util=98.68% 00:35:36.923 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:37.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:35:37.182 17:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:37.441 
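Teardown mirrors the bring-up in reverse. In outline, a sketch of the multipath.sh epilogue as traced above (waitforserial_disconnect is the common helper that polls lsblk until the serial disappears):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop both initiator paths in one call, then remove the subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Clear fio verify-state files and hand the rest to the common cleanup
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
    trap - SIGINT SIGTERM EXIT
    nvmftestfini   # syncs, unloads nvme-tcp/nvme-fabrics, kills nvmf_tgt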
17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.441 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.441 rmmod nvme_tcp 00:35:37.441 rmmod nvme_fabrics 00:35:37.441 rmmod nvme_keyring 00:35:37.442 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 125345 ']' 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 125345 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 125345 ']' 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 125345 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125345 00:35:37.700 killing process with pid 125345 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125345' 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 125345 00:35:37.700 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 125345 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:37.960 17:56:19 
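killprocess, expanded just above, is the shared helper that stops the target app once its pid is known. Reconstructed from the trace (simplified; the real helper in autotest_common.sh also guards against sudo-owned processes and non-Linux hosts, as the '[' Linux = Linux ']' and '[' reactor_0 = sudo ']' checks show):

    killprocess() {
        local pid=$1
        kill -0 "$pid"    # fail fast if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap the target and propagate its exit status
    }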
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:37.960 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:35:38.220 ************************************ 00:35:38.220 END TEST nvmf_target_multipath 00:35:38.220 ************************************ 00:35:38.220 00:35:38.220 real 0m21.351s 00:35:38.220 user 1m12.595s 00:35:38.220 sys 0m8.567s 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:38.220 17:56:19 
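With the multipath test finished, the suite moves on to nvmf_zcopy. run_test is the harness wrapper that brackets each script with the START TEST/END TEST banners and the real/user/sys timing summary seen above; judging by the arguments in the next trace line, the same test could also be launched by hand, roughly:

    # Standalone invocation of the next test (path and flags as in this run)
    cd /home/vagrant/spdk_repo/spdk
    ./test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode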
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:38.220 ************************************ 00:35:38.220 START TEST nvmf_zcopy 00:35:38.220 ************************************ 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:38.220 * Looking for test storage... 00:35:38.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:35:38.220 17:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:38.479 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.480 --rc genhtml_branch_coverage=1 00:35:38.480 --rc genhtml_function_coverage=1 00:35:38.480 --rc genhtml_legend=1 00:35:38.480 --rc geninfo_all_blocks=1 00:35:38.480 --rc geninfo_unexecuted_blocks=1 00:35:38.480 00:35:38.480 ' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.480 --rc genhtml_branch_coverage=1 00:35:38.480 --rc genhtml_function_coverage=1 00:35:38.480 --rc genhtml_legend=1 00:35:38.480 --rc geninfo_all_blocks=1 00:35:38.480 --rc geninfo_unexecuted_blocks=1 00:35:38.480 00:35:38.480 ' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.480 --rc genhtml_branch_coverage=1 00:35:38.480 --rc genhtml_function_coverage=1 00:35:38.480 --rc genhtml_legend=1 00:35:38.480 --rc geninfo_all_blocks=1 00:35:38.480 --rc geninfo_unexecuted_blocks=1 00:35:38.480 00:35:38.480 ' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.480 --rc genhtml_branch_coverage=1 00:35:38.480 --rc genhtml_function_coverage=1 00:35:38.480 --rc genhtml_legend=1 00:35:38.480 --rc geninfo_all_blocks=1 00:35:38.480 --rc geninfo_unexecuted_blocks=1 00:35:38.480 00:35:38.480 ' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.480 17:56:20 
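The lcov version gate traced a few entries back runs through scripts/common.sh's cmp_versions, which splits version strings on ./-/: and compares them component-wise. A simplified numeric sketch of that logic (only the '<' case used by lt() is implemented here; the real helper also validates each field through its decimal helper):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        # Simplified: ignores the operator argument and implements '<' only
        local IFS=.-:
        local ver1 ver2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists, padding with zeros
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions: '<' does not hold
    }

    lt 1.15 2 && echo 'lcov predates 2.x'   # matches the trace: 1.15 < 2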
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:35:38.480 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:38.481 17:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:35:38.481 Cannot find device "nvmf_init_br"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:35:38.481 Cannot find device "nvmf_init_br2"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:35:38.481 Cannot find device "nvmf_tgt_br"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:35:38.481 Cannot find device "nvmf_tgt_br2"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:35:38.481 Cannot find device "nvmf_init_br"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:35:38.481 Cannot find device "nvmf_init_br2"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:35:38.481 Cannot find device "nvmf_tgt_br"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:35:38.481 Cannot find device "nvmf_tgt_br2"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:35:38.481 Cannot find device "nvmf_br"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:35:38.481 Cannot find device "nvmf_init_if"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:35:38.481 Cannot find device "nvmf_init_if2"
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:35:38.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:35:38.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:35:38.481 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
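The trace above first tears down any leftover test interfaces (the "Cannot find device" errors are expected on a clean host and are swallowed by the trailing true), then builds the topology this run uses: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side peers later enslaved to one bridge. A condensed sketch of that setup, with the names from this log (only one pair per side shown; the second pair and the remaining link-up steps follow the same pattern):

    # Minimal reconstruction of the topology built above (sketch, not the helper itself).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # the bridge stitches both sides together
    ip link set nvmf_tgt_br master nvmf_br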
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:35:38.740 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
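The ipts helper is evidently a thin wrapper that re-issues the same iptables arguments with an identifying comment appended, so later cleanup can find exactly the rules this test inserted. A minimal sketch under that assumption:

    # Assumed shape of the ipts wrapper traced at nvmf/common.sh@786: tag every
    # rule with an SPDK_NVMF comment built from the original arguments.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    # Teardown could then drop every tagged rule in one pass (also an assumption):
    iptables-save | grep -v SPDK_NVMF | iptables-restore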
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:35:38.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:35:38.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms
00:35:38.741 
00:35:38.741 --- 10.0.0.3 ping statistics ---
00:35:38.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.741 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:35:38.741 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:35:38.741 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms
00:35:38.741 
00:35:38.741 --- 10.0.0.4 ping statistics ---
00:35:38.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.741 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:35:38.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:38.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:35:38.741 
00:35:38.741 --- 10.0.0.1 ping statistics ---
00:35:38.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.741 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:35:38.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:38.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
00:35:38.741 
00:35:38.741 --- 10.0.0.2 ping statistics ---
00:35:38.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.741 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=126007
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 126007
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 126007 ']'
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:38.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:38.741 17:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:39.019 [2024-11-20 17:56:20.536107] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:39.019 [2024-11-20 17:56:20.537392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:35:39.019 [2024-11-20 17:56:20.537468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:39.019 [2024-11-20 17:56:20.682839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:39.019 [2024-11-20 17:56:20.777190] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:39.019 [2024-11-20 17:56:20.777241] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:39.019 [2024-11-20 17:56:20.777253] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:39.019 [2024-11-20 17:56:20.777262] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:39.019 [2024-11-20 17:56:20.777270] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:39.019 [2024-11-20 17:56:20.777299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:35:39.303 [2024-11-20 17:56:20.865993] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:39.303 [2024-11-20 17:56:20.866380] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
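nvmfappstart runs the target inside the namespace (so it binds 10.0.0.3/10.0.0.4) and waitforlisten blocks until the JSON-RPC socket answers, which is why the trace records max_retries=100 before the startup notices appear. Roughly, with the paths from this log (the polling detail is an assumption, not the literal helper):

    # Launch nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 100; i > 0; i--)); do   # mirrors max_retries=100 above
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break     # the socket is up once any RPC succeeds
        sleep 0.1
    done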
00:35:39.869 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:39.869 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:35:39.869 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:35:39.869 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:39.869 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 [2024-11-20 17:56:21.682197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 [2024-11-20 17:56:21.702397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 malloc0
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
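The rpc_cmd calls above provision the target end to end: a zero-copy-capable TCP transport, subsystem cnode1 (up to 10 namespaces), data and discovery listeners on 10.0.0.3:4420, a 32 MiB malloc bdev with 4096-byte blocks, and that bdev attached as namespace 1. The same sequence as plain rpc.py calls (socket path taken from this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                 # the transport under test
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1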
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:35:40.127 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:35:40.128 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:35:40.128 {
00:35:40.128 "params": {
00:35:40.128 "name": "Nvme$subsystem",
00:35:40.128 "trtype": "$TEST_TRANSPORT",
00:35:40.128 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:40.128 "adrfam": "ipv4",
00:35:40.128 "trsvcid": "$NVMF_PORT",
00:35:40.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:40.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:40.128 "hdgst": ${hdgst:-false},
00:35:40.128 "ddgst": ${ddgst:-false}
00:35:40.128 },
00:35:40.128 "method": "bdev_nvme_attach_controller"
00:35:40.128 }
00:35:40.128 EOF
00:35:40.128 )")
00:35:40.128 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:35:40.128 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:35:40.128 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:35:40.128 17:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:35:40.128 "params": {
00:35:40.128 "name": "Nvme1",
00:35:40.128 "trtype": "tcp",
00:35:40.128 "traddr": "10.0.0.3",
00:35:40.128 "adrfam": "ipv4",
00:35:40.128 "trsvcid": "4420",
00:35:40.128 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:40.128 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:40.128 "hdgst": false,
00:35:40.128 "ddgst": false
00:35:40.128 },
00:35:40.128 "method": "bdev_nvme_attach_controller"
00:35:40.128 }'
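gen_nvmf_target_json renders that bdev_nvme_attach_controller stanza and bdevperf reads it as its --json config over an anonymous descriptor (/dev/fd/62 here), so the config never touches disk. An equivalent invocation via process substitution (a sketch; gen_nvmf_target_json is the helper traced above):

    # Feed the generated JSON straight to bdevperf over a pipe-backed fd.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192    # 10 s verify pass, queue depth 128, 8 KiB IOs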
00:35:40.128 [2024-11-20 17:56:21.831956] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:35:40.128 [2024-11-20 17:56:21.832104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126054 ]
00:35:40.386 [2024-11-20 17:56:21.975587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:40.645 [2024-11-20 17:56:22.079995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:35:40.645 Running I/O for 10 seconds...
00:35:42.517 5363.00 IOPS, 41.90 MiB/s
[2024-11-20T17:56:25.292Z] 5366.00 IOPS, 41.92 MiB/s
[2024-11-20T17:56:26.683Z] 5419.67 IOPS, 42.34 MiB/s
[2024-11-20T17:56:27.619Z] 5481.50 IOPS, 42.82 MiB/s
[2024-11-20T17:56:28.556Z] 5511.40 IOPS, 43.06 MiB/s
[2024-11-20T17:56:29.492Z] 5542.00 IOPS, 43.30 MiB/s
[2024-11-20T17:56:30.434Z] 5551.14 IOPS, 43.37 MiB/s
[2024-11-20T17:56:31.369Z] 5544.88 IOPS, 43.32 MiB/s
[2024-11-20T17:56:32.305Z] 5532.44 IOPS, 43.22 MiB/s
[2024-11-20T17:56:32.305Z] 5520.80 IOPS, 43.13 MiB/s
00:35:50.517 Latency(us)
[2024-11-20T17:56:32.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:50.517 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:50.517 Verification LBA range: start 0x0 length 0x1000
00:35:50.517 Nvme1n1 : 10.02 5524.15 43.16 0.00 0.00 23099.16 2815.07 30384.87
[2024-11-20T17:56:32.305Z] ===================================================================================================================
[2024-11-20T17:56:32.305Z] Total : 5524.15 43.16 0.00 0.00 23099.16 2815.07 30384.87
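The summary line is internally consistent: 5524.15 IOPS at 8192-byte IOs is 5524.15 * 8192 / 1048576, about 43.16 MiB/s, matching the MiB/s column, and at queue depth 128 Little's law predicts an average latency near 128 / 5524.15 s, about 23.2 ms, in line with the reported 23099.16 us. A quick check:

    # Sanity-check the bdevperf table above.
    echo '5524.15 * 8192 / 1048576' | bc -l    # -> 43.15...   (MiB/s column)
    echo '128 / 5524.15 * 1000000' | bc -l     # -> 23171...   us; reported average 23099.16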
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=126167
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:35:50.775 {
00:35:50.775 "params": {
00:35:50.775 "name": "Nvme$subsystem",
00:35:50.775 "trtype": "$TEST_TRANSPORT",
00:35:50.775 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:50.775 "adrfam": "ipv4",
00:35:50.775 "trsvcid": "$NVMF_PORT",
00:35:50.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:50.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:50.775 "hdgst": ${hdgst:-false},
00:35:50.775 "ddgst": ${ddgst:-false}
00:35:50.775 },
00:35:50.775 "method": "bdev_nvme_attach_controller"
00:35:50.775 }
00:35:50.775 EOF
00:35:50.775 )")
00:35:50.775 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:35:50.775 [2024-11-20 17:56:32.497883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.775 [2024-11-20 17:56:32.497962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.776 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
2024/11/20 17:56:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:35:50.776 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:35:50.776 17:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:35:50.776 "params": {
00:35:50.776 "name": "Nvme1",
00:35:50.776 "trtype": "tcp",
00:35:50.776 "traddr": "10.0.0.3",
00:35:50.776 "adrfam": "ipv4",
00:35:50.776 "trsvcid": "4420",
00:35:50.776 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:50.776 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:50.776 "hdgst": false,
00:35:50.776 "ddgst": false
00:35:50.776 },
00:35:50.776 "method": "bdev_nvme_attach_controller"
00:35:50.776 }'
[... the same failure triple (subsystem.c:2128: Requested NSID 1 already in use / nvmf_rpc.c:1517: Unable to add namespace / JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats at roughly 12 ms intervals from 17:56:32.509832 through 17:56:32.593793 while the second bdevperf starts up ...]
00:35:51.035 [2024-11-20 17:56:32.598679] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:35:51.035 [2024-11-20 17:56:32.599254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126167 ]
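The flood of identical failures is deliberate rather than a defect: NSID 1 is already attached, so target/zcopy.sh can keep calling nvmf_subsystem_add_ns while the zero-copy randrw workload runs and confirm that every call is cleanly rejected with -32602 instead of corrupting target state. The loop is, in effect (a sketch; the script's exact control flow is an assumption):

    # Re-add the busy namespace for as long as bdevperf (PID $perfpid) runs;
    # a success here would be a test failure, so each call is expected to error out.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    while kill -0 "$perfpid" 2> /dev/null; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 && exit 1
    done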
[... the failure triple keeps repeating at ~12 ms intervals, 17:56:32.605781 through 17:56:32.737781 ...]
00:35:51.036 [2024-11-20 17:56:32.739525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[... repeats continue, 17:56:32.745763 through 17:56:32.829793 ...]
00:35:51.295 [2024-11-20 17:56:32.839636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[... repeats continue, 17:56:32.841770 through 17:56:33.009775 ...]
00:35:51.296 Running I/O for 5 seconds...
[... with I/O in flight the loop keeps calling nvmf_subsystem_add_ns and each call fails the same way, at 17:56:33.024750, .038267, .057663, .069088, .081963, .100925, .111641, .127118, .145219, .157394, .166792, .183784, .199979, .214789, .234282, .254188, .272635, .286864, .305923, .315883, .331692, .347694, .357955 and .370333 ...]
00:35:51.815 [2024-11-20 17:56:33.389523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:51.815 [2024-11-20 17:56:33.389560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.400236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.400286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.415694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.415732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.428655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.428690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.438653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.438685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.455342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.455375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.473130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.473164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:51.815 [2024-11-20 17:56:33.483740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.483773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.498959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.498993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.515426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.515461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.530973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.531014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.815 [2024-11-20 17:56:33.549508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.815 [2024-11-20 17:56:33.549543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.815 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.816 [2024-11-20 17:56:33.560869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.816 [2024-11-20 17:56:33.560903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.816 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.816 [2024-11-20 17:56:33.572991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.816 [2024-11-20 17:56:33.573047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.816 2024/11/20 17:56:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.816 [2024-11-20 17:56:33.585583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.816 [2024-11-20 17:56:33.585619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.816 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:51.816 [2024-11-20 17:56:33.597571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.816 [2024-11-20 17:56:33.597618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.816 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.609490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.609528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.620686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.620726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.634132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.634169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.654140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.654192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.671733] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.671772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.686229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.686265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.706608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.706651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.723889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.723932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.738715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.738750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.756697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.756731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.769874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.769908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.780190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.780243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.795871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.795904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.812370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.812403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.826789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.826849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.846074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.846107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.075 [2024-11-20 17:56:33.855400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.075 [2024-11-20 17:56:33.855431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.075 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.871027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.871061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.889457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.889488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.900207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.900255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.915289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.915319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.933209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.933240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.945116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.945147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.958109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.958172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.977155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.977183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.990088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.990118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:33.998991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:33.999024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:34.014640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:34.014673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 11262.00 IOPS, 87.98 MiB/s [2024-11-20T17:56:34.122Z] 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:34.033125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:34.033156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.334 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.334 [2024-11-20 17:56:34.048012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.334 [2024-11-20 17:56:34.048075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.335 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.335 [2024-11-20 17:56:34.063614] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.335 [2024-11-20 17:56:34.063646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.335 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.335 [2024-11-20 17:56:34.080958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.335 [2024-11-20 17:56:34.080993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.335 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.335 [2024-11-20 17:56:34.093482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.335 [2024-11-20 17:56:34.093512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.335 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.335 [2024-11-20 17:56:34.102925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.335 [2024-11-20 17:56:34.102958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.335 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.335 [2024-11-20 17:56:34.118970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.335 [2024-11-20 17:56:34.119010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.136573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.136608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.151498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.151533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.168934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.168972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.181961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.181994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.201439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.201473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.212430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.212462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.227719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.227750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.245239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.245271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.256550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.256582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.270288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.270323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.289758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.289792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.300522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.300571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.316308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.316343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.329991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.330024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.349237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.349271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.360216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.360251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.594 [2024-11-20 17:56:34.376537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.594 [2024-11-20 17:56:34.376582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.594 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.387487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.387520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.400530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.400576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.415172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.415242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.433476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.433553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.444054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
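
The stretch above is the namespace add/remove stress portion of the test: while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, the harness keeps re-issuing nvmf_subsystem_add_ns for the same malloc0 bdev. The target rejects every attempt at subsystem.c:2128 ("Requested NSID 1 already in use"), and the RPC layer surfaces each rejection as JSON-RPC error -32602 (Invalid parameters); the map[...] and %!s(bool=false) rendering of the parameters indicates a Go-based JSON-RPC client, and the interleaved IOPS readings show I/O continuing throughout. What follows is a minimal illustrative sketch of one such call, not the harness's own code; it assumes SPDK's default RPC socket path /var/tmp/spdk.sock and reuses the cnode1/malloc0 names from this run.

#!/usr/bin/env python3
# Minimal sketch, assuming a local SPDK target listening on its
# default RPC socket: issue one nvmf_subsystem_add_ns call and
# observe the -32602 rejection logged above when NSID 1 is taken.
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # SPDK default; adjust if started with -r

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    # Same parameters the failing calls above carry.
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(RPC_SOCK)
    sock.sendall(json.dumps(request).encode())
    # A single recv suffices for this sketch; the error reply is tiny.
    reply = json.loads(sock.recv(65536).decode())

# While the NSID is in use, the reply carries an error object,
# e.g. {"code": -32602, "message": "Invalid parameters"}.
print(reply.get("error") or reply.get("result"))

Under this reading, the error object (rather than a result) is presumably the expected outcome for each iteration of the loop, which is why the identical rejection repeats below while the workload keeps running.
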
00:35:52.853 [2024-11-20 17:56:34.444106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.459816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.459852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.474376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.474410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.494255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.494286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.513188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.513224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.524195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.524224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.540274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.540306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.853 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.853 [2024-11-20 17:56:34.555550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.853 [2024-11-20 17:56:34.555581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.854 [2024-11-20 17:56:34.570896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.854 [2024-11-20 17:56:34.570929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.854 [2024-11-20 17:56:34.580376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.854 [2024-11-20 17:56:34.580406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.854 [2024-11-20 17:56:34.595334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.854 [2024-11-20 17:56:34.595367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.854 [2024-11-20 17:56:34.609130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.854 [2024-11-20 17:56:34.609160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:52.854 [2024-11-20 17:56:34.623305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.854 [2024-11-20 17:56:34.623336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.854 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.640288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.640320] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.653542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.653574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.663238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.663298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.678510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.678542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.697604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.697636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.708734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.708768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.723219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.723250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.740870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.740906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.752165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.752203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.767670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.767733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.780939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.781004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.790643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.790700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.806107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.806155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.825411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.114 [2024-11-20 17:56:34.825446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:35:53.114 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.114 [2024-11-20 17:56:34.844179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.115 [2024-11-20 17:56:34.844212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.115 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.115 [2024-11-20 17:56:34.859215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.115 [2024-11-20 17:56:34.859248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.115 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.115 [2024-11-20 17:56:34.876471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.115 [2024-11-20 17:56:34.876505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.115 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.115 [2024-11-20 17:56:34.891156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.115 [2024-11-20 17:56:34.891190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.115 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.373 [2024-11-20 17:56:34.909567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.373 [2024-11-20 17:56:34.909615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.373 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.373 [2024-11-20 17:56:34.928574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.373 [2024-11-20 17:56:34.928608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.373 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:53.373 [2024-11-20 17:56:34.943190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.373 [2024-11-20 17:56:34.943238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.373 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.373 [2024-11-20 17:56:34.959780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.373 [2024-11-20 17:56:34.959849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.373 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.373 [2024-11-20 17:56:34.975671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.373 [2024-11-20 17:56:34.975735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.373 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:34.991280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:34.991313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.009000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.009033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 11397.50 IOPS, 89.04 MiB/s [2024-11-20T17:56:35.162Z] [2024-11-20 17:56:35.021181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.021212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.033745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.033775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.042478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.042508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.055809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.055840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.069803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.069836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.080653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.080693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.095380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.095412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.110491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.110522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:53.374 [2024-11-20 17:56:35.130101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.130133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.141511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.141546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.374 [2024-11-20 17:56:35.152706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.374 [2024-11-20 17:56:35.152738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.374 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.632 [2024-11-20 17:56:35.166120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.632 [2024-11-20 17:56:35.166149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.632 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.632 [2024-11-20 17:56:35.187039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.632 [2024-11-20 17:56:35.187141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.203508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.203582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.218615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.218649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.228405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.228438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.243660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.243692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.256892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.256924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.269921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.269952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.279423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.279453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.294545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.294576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.312152] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.312184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.327248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.327278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.344603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.344634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.359765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.359799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.376089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.376133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.390475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.390507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.400036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.400084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.633 [2024-11-20 17:56:35.415278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.633 [2024-11-20 17:56:35.415309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.633 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.431323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.431356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.891 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.448023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.448055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.891 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.463774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.463807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.891 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.479302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.479350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.891 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.498219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.498253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.891 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.891 [2024-11-20 17:56:35.514211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:53.891 [2024-11-20 17:56:35.514244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.524337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.524368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.539251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.539284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.556130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.556160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.566794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.566852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.583734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.583767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.599948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.599983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.614927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.614961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.632915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.632948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.644548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.644580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.658728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.658760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:53.892 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:53.892 [2024-11-20 17:56:35.678203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:53.892 [2024-11-20 17:56:35.678236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.697429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.697477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.709089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:54.151 [2024-11-20 17:56:35.709122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.720396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.720458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.734016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.734075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.752845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.752877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.764490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.764522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.780693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.780727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.791902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.791968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.805610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.805641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.815097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.815127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.830024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.830099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.848655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.848687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.859474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.859505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.875346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.875394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.890727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.890760] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.909327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.909362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.920877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.920908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.151 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.151 [2024-11-20 17:56:35.934342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.151 [2024-11-20 17:56:35.934376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:35.953637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:35.953673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:35.974197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:35.974236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:35.991016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:35.991051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:36.007814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.007847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 11378.33 IOPS, 88.89 MiB/s [2024-11-20T17:56:36.198Z] [2024-11-20 17:56:36.021898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.021931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:36.042532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.042567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:36.059928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.059961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:36.075491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.075524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.410 [2024-11-20 17:56:36.090681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.410 [2024-11-20 17:56:36.090713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.410 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.107699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.107733] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.123280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.123315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.139437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.139489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.155153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.155188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.171858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.171892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.411 [2024-11-20 17:56:36.182903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.411 [2024-11-20 17:56:36.182936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.411 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.199676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.199721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.212683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.212714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.226331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.226368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.244995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.245028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.255994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.256042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.267525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.267557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.669 [2024-11-20 17:56:36.284107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.669 [2024-11-20 17:56:36.284148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.669 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.298789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.298846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.315148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.315181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.331597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.331631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.346906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.346941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.365628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.365663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.375854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.375887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.390915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.390949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:35:54.670 [2024-11-20 17:56:36.408608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.408640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.418719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.418751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.435403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.435438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.670 [2024-11-20 17:56:36.450952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.670 [2024-11-20 17:56:36.450987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.670 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.928 [2024-11-20 17:56:36.470063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.929 [2024-11-20 17:56:36.470117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.929 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.929 [2024-11-20 17:56:36.481219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.929 [2024-11-20 17:56:36.481251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.929 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:54.929 [2024-11-20 17:56:36.492644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:54.929 [2024-11-20 17:56:36.492692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:54.929 2024/11/20 17:56:36 error on 
00:35:55.447 [2024-11-20 17:56:36.995082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 [2024-11-20 17:56:37.011234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.447 [2024-11-20 17:56:37.011268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 11263.00 IOPS, 87.99 MiB/s [2024-11-20T17:56:37.235Z] [2024-11-20 17:56:37.027834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.447 [2024-11-20 17:56:37.027869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 [2024-11-20 17:56:37.042716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.447 [2024-11-20 17:56:37.042749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 [2024-11-20 17:56:37.052686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.447 [2024-11-20 17:56:37.052718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 [2024-11-20 17:56:37.065495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.447 [2024-11-20 17:56:37.065529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.447 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:55.447 [2024-11-20 17:56:37.077306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.448 [2024-11-20 17:56:37.077341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.448 2024/11/20 17:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
00:35:56.484 11195.40 IOPS, 87.46 MiB/s [2024-11-20T17:56:38.272Z]
00:35:56.484
00:35:56.484                                              Latency(us)
00:35:56.484 [2024-11-20T17:56:38.272Z] Device Information : runtime(s)      IOPS     MiB/s    Fail/s    TO/s   Average       min       max
00:35:56.484 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:56.484 Nvme1n1            :       5.01             11198.10     87.49      0.00    0.00  11416.51   2487.39  20614.05
00:35:56.484 [2024-11-20T17:56:38.272Z] ===================================================================================================================
00:35:56.484 [2024-11-20T17:56:38.272Z] Total              :                  11198.10     87.49      0.00    0.00  11416.51   2487.39  20614.05
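The summary row is internally consistent: taking the 8192-byte IO size from the Job: line, the MiB/s column follows from the IOPS column (a quick check, assuming MiB/s = IOPS x IO size / 2^20):

```bash
# Cross-check the summary row: 11198.10 IOPS at 8192 bytes per I/O.
awk 'BEGIN { printf "%.2f MiB/s\n", 11198.10 * 8192 / 1048576 }'   # prints 87.49 MiB/s
```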
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.485 [2024-11-20 17:56:38.233801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.485 2024/11/20 17:56:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:56.485 [2024-11-20 17:56:38.245802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.485 [2024-11-20 17:56:38.245827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.485 2024/11/20 17:56:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:35:56.485 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (126167) - No such process 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 126167 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.485 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 delay0 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.744 17:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:35:56.744 [2024-11-20 17:56:38.437582] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:04.860 Initializing NVMe Controllers 00:36:04.860 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:04.860 Associating TCP (addr:10.0.0.3 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:04.860 Initialization complete. Launching workers. 00:36:04.860 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 225, failed: 25797 00:36:04.860 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25872, failed to submit 150 00:36:04.860 success 25806, unsuccessful 66, failed 0 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:04.860 rmmod nvme_tcp 00:36:04.860 rmmod nvme_fabrics 00:36:04.860 rmmod nvme_keyring 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 126007 ']' 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 126007 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 126007 ']' 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 126007 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126007 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:04.860 killing process with pid 126007 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126007' 00:36:04.860 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 126007 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 126007 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:04.861 17:56:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:04.861 17:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:36:04.861 00:36:04.861 real 0m26.289s 00:36:04.861 user 0m38.032s 00:36:04.861 sys 0m9.529s 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.861 ************************************ 00:36:04.861 END TEST nvmf_zcopy 00:36:04.861 ************************************ 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:04.861 ************************************ 00:36:04.861 START TEST nvmf_nmic 00:36:04.861 ************************************ 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:04.861 * Looking for test storage... 00:36:04.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.861 --rc genhtml_branch_coverage=1 00:36:04.861 --rc genhtml_function_coverage=1 00:36:04.861 --rc genhtml_legend=1 00:36:04.861 --rc geninfo_all_blocks=1 00:36:04.861 --rc geninfo_unexecuted_blocks=1 00:36:04.861 00:36:04.861 ' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.861 --rc genhtml_branch_coverage=1 00:36:04.861 --rc genhtml_function_coverage=1 00:36:04.861 --rc genhtml_legend=1 00:36:04.861 --rc geninfo_all_blocks=1 00:36:04.861 --rc geninfo_unexecuted_blocks=1 00:36:04.861 00:36:04.861 ' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.861 --rc genhtml_branch_coverage=1 00:36:04.861 --rc genhtml_function_coverage=1 00:36:04.861 --rc genhtml_legend=1 00:36:04.861 --rc geninfo_all_blocks=1 00:36:04.861 --rc geninfo_unexecuted_blocks=1 00:36:04.861 00:36:04.861 ' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.861 --rc genhtml_branch_coverage=1 00:36:04.861 --rc genhtml_function_coverage=1 00:36:04.861 --rc genhtml_legend=1 00:36:04.861 --rc geninfo_all_blocks=1 00:36:04.861 --rc geninfo_unexecuted_blocks=1 00:36:04.861 00:36:04.861 ' 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.861 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.862 17:56:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:04.862 Cannot find device "nvmf_init_br" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:04.862 Cannot find device "nvmf_init_br2" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:04.862 Cannot find device "nvmf_tgt_br" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:04.862 Cannot find device "nvmf_tgt_br2" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:04.862 Cannot find device "nvmf_init_br" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:04.862 Cannot find device "nvmf_init_br2" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:04.862 Cannot find device "nvmf_tgt_br" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:04.862 Cannot find device "nvmf_tgt_br2" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
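The failing nomaster/down/delete probes here (they continue just below) are expected: nvmf_veth_init tears down any leftover topology first, so on a clean host each probe reports "Cannot find device". A condensed sketch of the rebuild that follows, using the names and addresses from this log: only the first initiator/target pair is shown (the harness also creates nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4), and error handling is omitted.

ip netns add nvmf_tgt_ns_spdk                              # target side lives in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                            # bridge joins the two pairs
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                         # reachability check, mirrored below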
00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:04.862 Cannot find device "nvmf_br" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:04.862 Cannot find device "nvmf_init_if" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:04.862 Cannot find device "nvmf_init_if2" 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:36:04.862 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:04.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:04.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:04.863 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:05.121 17:56:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:05.121 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:05.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:05.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:36:05.122 00:36:05.122 --- 10.0.0.3 ping statistics --- 00:36:05.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.122 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:05.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:05.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:36:05.122 00:36:05.122 --- 10.0.0.4 ping statistics --- 00:36:05.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.122 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:05.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:36:05.122 00:36:05.122 --- 10.0.0.1 ping statistics --- 00:36:05.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.122 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:05.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:36:05.122 00:36:05.122 --- 10.0.0.2 ping statistics --- 00:36:05.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.122 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=126539 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 126539 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 126539 ']' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:05.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:05.122 17:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.380 [2024-11-20 17:56:46.966600] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.380 [2024-11-20 17:56:46.967932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:05.380 [2024-11-20 17:56:46.968019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.380 [2024-11-20 17:56:47.112478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:05.638 [2024-11-20 17:56:47.203683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.638 [2024-11-20 17:56:47.203759] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.638 [2024-11-20 17:56:47.203781] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.638 [2024-11-20 17:56:47.203797] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.638 [2024-11-20 17:56:47.203822] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.638 [2024-11-20 17:56:47.204300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.638 [2024-11-20 17:56:47.204364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:05.638 [2024-11-20 17:56:47.204628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:05.638 [2024-11-20 17:56:47.204647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.638 [2024-11-20 17:56:47.307526] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:05.638 [2024-11-20 17:56:47.307908] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:05.638 [2024-11-20 17:56:47.308139] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
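For reference, the nvmfappstart step above boils down to launching nvmf_tgt inside the target netns and then blocking until its JSON-RPC socket answers; the notices that follow (interrupt mode, DPDK EAL init, one reactor per core, threads switched to intr mode) are the app coming up under --interrupt-mode. A rough sketch: the command line is copied from the log, while the polling loop is only an approximation of the harness's waitforlisten and assumes the default /var/tmp/spdk.sock socket.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# poll until the app creates the RPC socket and answers a trivial method
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done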
00:36:05.638 [2024-11-20 17:56:47.308527] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:05.638 [2024-11-20 17:56:47.309011] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 [2024-11-20 17:56:48.102638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 Malloc0 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
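Taken together (the "Listening on 10.0.0.3 port 4420" notice lands just below), the provisioning for this test is five RPCs. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the harness the equivalent sequence is roughly the following; the rpc path assumes this job's checkout.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # auto-assigned NSID 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Test case 1 then deliberately adds the same Malloc0 to a second subsystem (cnode2); the expected "already claimed" rejection is logged just below.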
00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 [2024-11-20 17:56:48.166978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:06.574 test case1: single bdev can't be used in multiple subsystems 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.575 [2024-11-20 17:56:48.190513] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:06.575 [2024-11-20 17:56:48.190563] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:06.575 [2024-11-20 17:56:48.190579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:06.575 2024/11/20 17:56:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:06.575 request: 00:36:06.575 { 00:36:06.575 "method": "nvmf_subsystem_add_ns", 00:36:06.575 "params": { 00:36:06.575 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:06.575 "namespace": { 00:36:06.575 "bdev_name": "Malloc0", 00:36:06.575 "no_auto_visible": false 00:36:06.575 } 00:36:06.575 } 00:36:06.575 } 00:36:06.575 Got JSON-RPC error response 00:36:06.575 GoRPCClient: error on JSON-RPC call 00:36:06.575 17:56:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:06.575 Adding namespace failed - expected result. 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:06.575 test case2: host connect to nvmf target in multiple paths 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.575 [2024-11-20 17:56:48.202653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:06.575 17:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:09.106 17:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:36:09.106 17:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:09.106 [global] 00:36:09.106 thread=1 00:36:09.106 invalidate=1 00:36:09.106 rw=write 00:36:09.106 time_based=1 00:36:09.106 runtime=1 00:36:09.106 ioengine=libaio 00:36:09.106 direct=1 00:36:09.106 bs=4096 00:36:09.106 iodepth=1 00:36:09.106 norandommap=0 00:36:09.106 numjobs=1 00:36:09.106 00:36:09.106 verify_dump=1 00:36:09.106 verify_backlog=512 00:36:09.106 verify_state_save=0 00:36:09.106 do_verify=1 00:36:09.106 verify=crc32c-intel 00:36:09.106 [job0] 00:36:09.106 filename=/dev/nvme0n1 00:36:09.106 Could not set queue depth (nvme0n1) 00:36:09.106 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:09.106 fio-3.35 00:36:09.106 Starting 1 thread 00:36:10.042 00:36:10.042 job0: (groupid=0, jobs=1): err= 0: pid=126642: Wed Nov 20 17:56:51 2024 00:36:10.042 read: IOPS=2356, BW=9427KiB/s (9653kB/s)(9436KiB/1001msec) 00:36:10.042 slat (nsec): min=13848, max=58284, avg=17526.05, stdev=4780.00 00:36:10.043 clat (usec): min=176, max=554, avg=219.28, stdev=19.91 00:36:10.043 lat (usec): min=191, max=568, avg=236.81, stdev=21.74 00:36:10.043 clat percentiles (usec): 00:36:10.043 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:36:10.043 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:36:10.043 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:36:10.043 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 383], 99.95th=[ 445], 00:36:10.043 | 99.99th=[ 553] 00:36:10.043 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:36:10.043 slat (usec): min=19, max=159, avg=24.79, stdev= 6.43 00:36:10.043 clat (usec): min=109, max=364, avg=143.99, stdev=14.32 00:36:10.043 lat (usec): min=136, max=448, avg=168.78, stdev=17.19 00:36:10.043 clat percentiles (usec): 00:36:10.043 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:36:10.043 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:36:10.043 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:36:10.043 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 322], 99.95th=[ 322], 00:36:10.043 | 99.99th=[ 363] 00:36:10.043 bw ( KiB/s): min=11832, max=11832, per=100.00%, avg=11832.00, stdev= 0.00, samples=1 00:36:10.043 iops : min= 2958, max= 2958, avg=2958.00, stdev= 0.00, samples=1 00:36:10.043 lat (usec) : 250=98.11%, 500=1.87%, 750=0.02% 00:36:10.043 cpu : usr=1.60%, sys=8.30%, ctx=4928, majf=0, minf=5 00:36:10.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.043 issued rwts: total=2359,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.043 00:36:10.043 Run status group 0 (all jobs): 00:36:10.043 READ: bw=9427KiB/s (9653kB/s), 9427KiB/s-9427KiB/s (9653kB/s-9653kB/s), io=9436KiB (9662kB), run=1001-1001msec 00:36:10.043 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:36:10.043 00:36:10.043 Disk stats (read/write): 00:36:10.043 nvme0n1: ios=2098/2397, merge=0/0, ticks=477/374, in_queue=851, util=91.78% 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:10.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.043 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.043 rmmod nvme_tcp 00:36:10.301 rmmod nvme_fabrics 00:36:10.301 rmmod nvme_keyring 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 126539 ']' 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 126539 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 126539 ']' 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 126539 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126539 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:10.301 killing process with pid 126539 00:36:10.301 17:56:51 
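Worth noting: the single disconnect above reports "disconnected 2 controller(s)" because test case 2 connected the same subsystem over two listeners and nvme-cli disconnects by NQN. The pattern, condensed from the trace (host NQN/ID copied verbatim from the connect lines):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e
  HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e
  # Two paths to one subsystem: ports 4420 and 4421 on 10.0.0.3.
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
  # One disconnect by NQN drops both controllers.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1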
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126539' 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 126539 00:36:10.301 17:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 126539 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:10.560 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:36:10.819 00:36:10.819 real 0m6.255s 00:36:10.819 user 0m15.263s 00:36:10.819 sys 0m1.868s 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.819 ************************************ 00:36:10.819 END TEST nvmf_nmic 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 ************************************ 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 ************************************ 00:36:10.819 START TEST nvmf_fio_target 00:36:10.819 ************************************ 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:10.819 * Looking for test storage... 
00:36:10.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:10.819 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.079 --rc genhtml_branch_coverage=1 00:36:11.079 --rc genhtml_function_coverage=1 00:36:11.079 --rc genhtml_legend=1 00:36:11.079 --rc geninfo_all_blocks=1 00:36:11.079 --rc geninfo_unexecuted_blocks=1 00:36:11.079 00:36:11.079 ' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.079 --rc genhtml_branch_coverage=1 00:36:11.079 --rc genhtml_function_coverage=1 00:36:11.079 --rc genhtml_legend=1 00:36:11.079 --rc geninfo_all_blocks=1 00:36:11.079 --rc geninfo_unexecuted_blocks=1 00:36:11.079 00:36:11.079 ' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.079 --rc genhtml_branch_coverage=1 00:36:11.079 --rc genhtml_function_coverage=1 00:36:11.079 --rc genhtml_legend=1 00:36:11.079 --rc geninfo_all_blocks=1 00:36:11.079 --rc geninfo_unexecuted_blocks=1 00:36:11.079 00:36:11.079 ' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.079 --rc genhtml_branch_coverage=1 00:36:11.079 --rc genhtml_function_coverage=1 00:36:11.079 --rc genhtml_legend=1 00:36:11.079 --rc geninfo_all_blocks=1 00:36:11.079 --rc geninfo_unexecuted_blocks=1 00:36:11.079 
00:36:11.079 ' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.079 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:11.080 17:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:11.080 Cannot find device "nvmf_init_br" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:11.080 Cannot find device "nvmf_init_br2" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:11.080 Cannot find device "nvmf_tgt_br" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:11.080 Cannot find device "nvmf_tgt_br2" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:11.080 Cannot find device "nvmf_init_br" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:11.080 Cannot find device "nvmf_init_br2" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:11.080 Cannot find device "nvmf_tgt_br" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:11.080 Cannot find device "nvmf_tgt_br2" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:11.080 Cannot find device "nvmf_br" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:11.080 Cannot find device "nvmf_init_if" 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:36:11.080 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:11.339 Cannot find device "nvmf_init_if2" 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:11.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:11.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:11.339 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:11.340 17:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:11.340 17:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:11.340 17:56:53 
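At this point nvmf_veth_init has rebuilt the test topology torn down after the previous test: initiator-side veths in the root namespace (10.0.0.1/10.0.0.2) and target-side veths inside nvmf_tgt_ns_spdk (10.0.0.3/10.0.0.4), all enslaved to the nvmf_br bridge. A condensed sketch of one veth pair per side plus the bridge, using only commands that appear in the trace (the second, symmetric pair is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT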
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:36:11.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:36:11.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms
00:36:11.340
00:36:11.340 --- 10.0.0.3 ping statistics ---
00:36:11.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:11.340 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:36:11.340 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:36:11.340 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.131 ms
00:36:11.340
00:36:11.340 --- 10.0.0.4 ping statistics ---
00:36:11.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:11.340 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:36:11.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:11.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms
00:36:11.340
00:36:11.340 --- 10.0.0.1 ping statistics ---
00:36:11.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:11.340 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:36:11.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:11.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms
00:36:11.340
00:36:11.340 --- 10.0.0.2 ping statistics ---
00:36:11.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:11.340 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:36:11.340 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=126873
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 126873
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 126873 ']'
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:11.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:11.599 17:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:11.599 [2024-11-20 17:56:53.223461] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:11.599 [2024-11-20 17:56:53.224773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:11.599 [2024-11-20 17:56:53.224857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.599 [2024-11-20 17:56:53.365109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:11.857 [2024-11-20 17:56:53.463575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:11.857 [2024-11-20 17:56:53.463634] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.857 [2024-11-20 17:56:53.463646] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.857 [2024-11-20 17:56:53.463654] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.857 [2024-11-20 17:56:53.463662] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:11.857 [2024-11-20 17:56:53.464481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.857 [2024-11-20 17:56:53.464677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:11.857 [2024-11-20 17:56:53.464812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:11.857 [2024-11-20 17:56:53.464828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.857 [2024-11-20 17:56:53.569062] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:11.857 [2024-11-20 17:56:53.569209] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:11.857 [2024-11-20 17:56:53.569624] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:11.857 [2024-11-20 17:56:53.569919] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:11.857 [2024-11-20 17:56:53.570456] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
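The startup above is the piece this suite exercises: nvmf_tgt comes up with --interrupt-mode, and each reactor and poll-group thread confirms it is running in intr mode. Condensed, what nvmfappstart executed (binary path and flags copied from the trace; the polling loop is a sketch of what waitforlisten does, not its exact body):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
      sleep 0.5
  done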
00:36:12.792 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:12.792 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:36:12.793 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:12.793 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:12.793 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:12.793 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.793 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:13.051 [2024-11-20 17:56:54.621840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.051 17:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:13.309 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:13.309 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:13.874 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:13.874 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:14.132 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:14.132 17:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:14.396 17:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:14.396 17:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:14.657 17:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:14.915 17:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:14.915 17:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:15.481 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:15.481 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:15.739 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:15.739 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:15.997 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:16.255 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:16.255 17:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:16.514 17:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:16.514 17:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:17.080 17:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:17.080 [2024-11-20 17:56:58.829881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:17.080 17:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:17.338 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:36:17.902 17:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:36:19.802 17:57:01 
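fio.sh then provisions the whole layout over the RPC socket: seven 64 MiB malloc bdevs with 512-byte blocks, a raid0 over two of them, a concat over three more, and one subsystem exposing four namespaces on the listener fio will target. The same sequence, condensed in trace order (every call below appears in the xtrace above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do $RPC bdev_malloc_create 64 512; done      # Malloc0..Malloc6
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0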
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:36:19.802 17:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:19.802 [global] 00:36:19.802 thread=1 00:36:19.802 invalidate=1 00:36:19.802 rw=write 00:36:19.802 time_based=1 00:36:19.802 runtime=1 00:36:19.802 ioengine=libaio 00:36:19.802 direct=1 00:36:19.802 bs=4096 00:36:19.802 iodepth=1 00:36:19.802 norandommap=0 00:36:19.802 numjobs=1 00:36:19.802 00:36:19.802 verify_dump=1 00:36:19.802 verify_backlog=512 00:36:19.802 verify_state_save=0 00:36:19.802 do_verify=1 00:36:19.802 verify=crc32c-intel 00:36:19.802 [job0] 00:36:19.802 filename=/dev/nvme0n1 00:36:19.802 [job1] 00:36:19.802 filename=/dev/nvme0n2 00:36:19.802 [job2] 00:36:19.802 filename=/dev/nvme0n3 00:36:19.802 [job3] 00:36:19.802 filename=/dev/nvme0n4 00:36:20.060 Could not set queue depth (nvme0n1) 00:36:20.060 Could not set queue depth (nvme0n2) 00:36:20.060 Could not set queue depth (nvme0n3) 00:36:20.060 Could not set queue depth (nvme0n4) 00:36:20.060 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:20.060 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:20.060 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:20.060 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:20.060 fio-3.35 00:36:20.060 Starting 4 threads 00:36:21.435 00:36:21.435 job0: (groupid=0, jobs=1): err= 0: pid=127167: Wed Nov 20 17:57:02 2024 00:36:21.435 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:36:21.435 slat (nsec): min=10112, max=66624, avg=18388.48, stdev=6378.70 00:36:21.435 clat (usec): min=211, max=42451, avg=473.05, stdev=1315.27 00:36:21.435 lat (usec): min=229, max=42468, avg=491.44, stdev=1315.22 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 253], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 396], 00:36:21.435 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 00:36:21.435 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 515], 00:36:21.435 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 2180], 99.95th=[42206], 00:36:21.435 | 99.99th=[42206] 00:36:21.435 write: IOPS=1482, BW=5930KiB/s (6072kB/s)(5936KiB/1001msec); 0 zone resets 00:36:21.435 slat (usec): min=16, max=151, avg=27.31, stdev= 7.51 00:36:21.435 clat (usec): min=142, max=450, avg=303.89, stdev=71.11 00:36:21.435 lat (usec): min=167, max=541, avg=331.20, stdev=68.65 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 167], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:36:21.435 | 30.00th=[ 237], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:36:21.435 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 396], 00:36:21.435 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 441], 99.95th=[ 453], 00:36:21.435 | 99.99th=[ 453] 00:36:21.435 bw ( KiB/s): min= 7104, max= 7104, per=27.17%, avg=7104.00, stdev= 0.00, samples=1 00:36:21.435 iops : min= 1776, max= 1776, avg=1776.00, stdev= 0.00, samples=1 00:36:21.435 lat (usec) : 250=19.66%, 500=77.39%, 750=2.87% 00:36:21.435 lat (msec) : 
4=0.04%, 50=0.04% 00:36:21.435 cpu : usr=1.50%, sys=4.20%, ctx=2509, majf=0, minf=17 00:36:21.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 issued rwts: total=1024,1484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:21.435 job1: (groupid=0, jobs=1): err= 0: pid=127168: Wed Nov 20 17:57:02 2024 00:36:21.435 read: IOPS=1811, BW=7245KiB/s (7419kB/s)(7252KiB/1001msec) 00:36:21.435 slat (usec): min=14, max=105, avg=21.09, stdev= 6.62 00:36:21.435 clat (usec): min=196, max=3325, avg=275.03, stdev=80.30 00:36:21.435 lat (usec): min=213, max=3362, avg=296.12, stdev=81.09 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 255], 00:36:21.435 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:36:21.435 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:36:21.435 | 99.00th=[ 355], 99.50th=[ 515], 99.90th=[ 922], 99.95th=[ 3326], 00:36:21.435 | 99.99th=[ 3326] 00:36:21.435 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:36:21.435 slat (usec): min=16, max=150, avg=30.72, stdev= 9.45 00:36:21.435 clat (usec): min=122, max=791, avg=191.14, stdev=34.09 00:36:21.435 lat (usec): min=147, max=818, avg=221.86, stdev=37.46 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 169], 00:36:21.435 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:36:21.435 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 233], 00:36:21.435 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 603], 99.95th=[ 619], 00:36:21.435 | 99.99th=[ 791] 00:36:21.435 bw ( KiB/s): min= 8192, max= 8192, per=31.34%, avg=8192.00, stdev= 0.00, samples=1 00:36:21.435 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:21.435 lat (usec) : 250=58.77%, 500=40.87%, 750=0.26%, 1000=0.08% 00:36:21.435 lat (msec) : 4=0.03% 00:36:21.435 cpu : usr=1.30%, sys=7.90%, ctx=3863, majf=0, minf=5 00:36:21.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 issued rwts: total=1813,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:21.435 job2: (groupid=0, jobs=1): err= 0: pid=127169: Wed Nov 20 17:57:02 2024 00:36:21.435 read: IOPS=1493, BW=5974KiB/s (6117kB/s)(5980KiB/1001msec) 00:36:21.435 slat (nsec): min=16401, max=78276, avg=29481.76, stdev=9614.46 00:36:21.435 clat (usec): min=251, max=3523, avg=343.62, stdev=91.70 00:36:21.435 lat (usec): min=269, max=3557, avg=373.10, stdev=93.29 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 318], 00:36:21.435 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:36:21.435 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 392], 00:36:21.435 | 99.00th=[ 420], 99.50th=[ 482], 99.90th=[ 1020], 99.95th=[ 3523], 00:36:21.435 | 99.99th=[ 3523] 00:36:21.435 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:36:21.435 slat (usec): min=20, max=152, avg=39.50, 
stdev=10.80 00:36:21.435 clat (usec): min=157, max=396, avg=241.97, stdev=29.98 00:36:21.435 lat (usec): min=200, max=548, avg=281.47, stdev=33.23 00:36:21.435 clat percentiles (usec): 00:36:21.435 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 217], 00:36:21.435 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:36:21.435 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:36:21.435 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 383], 99.95th=[ 396], 00:36:21.435 | 99.99th=[ 396] 00:36:21.435 bw ( KiB/s): min= 8192, max= 8192, per=31.34%, avg=8192.00, stdev= 0.00, samples=1 00:36:21.435 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:21.435 lat (usec) : 250=32.07%, 500=67.70%, 750=0.10%, 1000=0.07% 00:36:21.435 lat (msec) : 2=0.03%, 4=0.03% 00:36:21.435 cpu : usr=2.30%, sys=7.80%, ctx=3033, majf=0, minf=9 00:36:21.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.435 issued rwts: total=1495,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:21.435 job3: (groupid=0, jobs=1): err= 0: pid=127170: Wed Nov 20 17:57:02 2024 00:36:21.435 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:36:21.435 slat (usec): min=11, max=137, avg=22.42, stdev= 7.05 00:36:21.436 clat (usec): min=262, max=42363, avg=470.96, stdev=1312.48 00:36:21.436 lat (usec): min=291, max=42379, avg=493.38, stdev=1312.30 00:36:21.436 clat percentiles (usec): 00:36:21.436 | 1.00th=[ 338], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 396], 00:36:21.436 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 437], 00:36:21.436 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 474], 95.00th=[ 502], 00:36:21.436 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 2311], 99.95th=[42206], 00:36:21.436 | 99.99th=[42206] 00:36:21.436 write: IOPS=1472, BW=5890KiB/s (6031kB/s)(5896KiB/1001msec); 0 zone resets 00:36:21.436 slat (usec): min=15, max=152, avg=29.04, stdev= 7.64 00:36:21.436 clat (usec): min=155, max=440, avg=302.87, stdev=69.61 00:36:21.436 lat (usec): min=190, max=537, avg=331.91, stdev=64.67 00:36:21.436 clat percentiles (usec): 00:36:21.436 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 219], 00:36:21.436 | 30.00th=[ 243], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:36:21.436 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 392], 00:36:21.436 | 99.00th=[ 416], 99.50th=[ 424], 99.90th=[ 433], 99.95th=[ 441], 00:36:21.436 | 99.99th=[ 441] 00:36:21.436 bw ( KiB/s): min= 6984, max= 6984, per=26.72%, avg=6984.00, stdev= 0.00, samples=1 00:36:21.436 iops : min= 1746, max= 1746, avg=1746.00, stdev= 0.00, samples=1 00:36:21.436 lat (usec) : 250=18.45%, 500=79.46%, 750=1.96%, 1000=0.04% 00:36:21.436 lat (msec) : 4=0.04%, 50=0.04% 00:36:21.436 cpu : usr=1.20%, sys=4.90%, ctx=2499, majf=0, minf=11 00:36:21.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.436 issued rwts: total=1024,1474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:21.436 00:36:21.436 Run status group 0 (all jobs): 00:36:21.436 READ: 
bw=20.9MiB/s (21.9MB/s), 4092KiB/s-7245KiB/s (4190kB/s-7419kB/s), io=20.9MiB (21.9MB), run=1001-1001msec 00:36:21.436 WRITE: bw=25.5MiB/s (26.8MB/s), 5890KiB/s-8184KiB/s (6031kB/s-8380kB/s), io=25.6MiB (26.8MB), run=1001-1001msec 00:36:21.436 00:36:21.436 Disk stats (read/write): 00:36:21.436 nvme0n1: ios=1073/1066, merge=0/0, ticks=493/313, in_queue=806, util=86.24% 00:36:21.436 nvme0n2: ios=1563/1691, merge=0/0, ticks=460/346, in_queue=806, util=87.23% 00:36:21.436 nvme0n3: ios=1088/1536, merge=0/0, ticks=376/391, in_queue=767, util=88.42% 00:36:21.436 nvme0n4: ios=1024/1056, merge=0/0, ticks=473/299, in_queue=772, util=89.59% 00:36:21.436 17:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:21.436 [global] 00:36:21.436 thread=1 00:36:21.436 invalidate=1 00:36:21.436 rw=randwrite 00:36:21.436 time_based=1 00:36:21.436 runtime=1 00:36:21.436 ioengine=libaio 00:36:21.436 direct=1 00:36:21.436 bs=4096 00:36:21.436 iodepth=1 00:36:21.436 norandommap=0 00:36:21.436 numjobs=1 00:36:21.436 00:36:21.436 verify_dump=1 00:36:21.436 verify_backlog=512 00:36:21.436 verify_state_save=0 00:36:21.436 do_verify=1 00:36:21.436 verify=crc32c-intel 00:36:21.436 [job0] 00:36:21.436 filename=/dev/nvme0n1 00:36:21.436 [job1] 00:36:21.436 filename=/dev/nvme0n2 00:36:21.436 [job2] 00:36:21.436 filename=/dev/nvme0n3 00:36:21.436 [job3] 00:36:21.436 filename=/dev/nvme0n4 00:36:21.436 Could not set queue depth (nvme0n1) 00:36:21.436 Could not set queue depth (nvme0n2) 00:36:21.436 Could not set queue depth (nvme0n3) 00:36:21.436 Could not set queue depth (nvme0n4) 00:36:21.436 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:21.436 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:21.436 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:21.436 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:21.436 fio-3.35 00:36:21.436 Starting 4 threads 00:36:22.842 00:36:22.842 job0: (groupid=0, jobs=1): err= 0: pid=127225: Wed Nov 20 17:57:04 2024 00:36:22.842 read: IOPS=961, BW=3844KiB/s (3936kB/s)(3848KiB/1001msec) 00:36:22.842 slat (nsec): min=16512, max=87793, avg=32858.82, stdev=9749.67 00:36:22.842 clat (usec): min=275, max=3690, avg=539.21, stdev=139.82 00:36:22.842 lat (usec): min=296, max=3717, avg=572.07, stdev=141.86 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 322], 5.00th=[ 424], 10.00th=[ 461], 20.00th=[ 482], 00:36:22.842 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 529], 00:36:22.842 | 70.00th=[ 545], 80.00th=[ 578], 90.00th=[ 676], 95.00th=[ 717], 00:36:22.842 | 99.00th=[ 906], 99.50th=[ 971], 99.90th=[ 3687], 99.95th=[ 3687], 00:36:22.842 | 99.99th=[ 3687] 00:36:22.842 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:22.842 slat (usec): min=26, max=119, avg=48.59, stdev=11.00 00:36:22.842 clat (usec): min=176, max=921, avg=382.63, stdev=52.30 00:36:22.842 lat (usec): min=225, max=986, avg=431.22, stdev=51.58 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 258], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 347], 00:36:22.842 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 392], 00:36:22.842 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 445], 
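For reference, the fio-wrapper flags above map one-to-one onto the job file it prints: -i 4096 becomes bs=4096, -d 1 becomes iodepth=1, -t randwrite becomes rw=randwrite, -r 1 becomes runtime=1, and -v switches on the crc32c-intel verify settings. A minimal standalone sketch of the same workload against one namespace (assuming the devices are still attached; this is not part of the test suite):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=4096 --iodepth=1 --ioengine=libaio --direct=1 \
        --thread=1 --invalidate=1 --time_based=1 --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1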
95.00th=[ 461], 00:36:22.842 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 586], 99.95th=[ 922], 00:36:22.842 | 99.99th=[ 922] 00:36:22.842 bw ( KiB/s): min= 4096, max= 4096, per=20.78%, avg=4096.00, stdev= 0.00, samples=1 00:36:22.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:22.842 lat (usec) : 250=0.40%, 500=68.38%, 750=29.46%, 1000=1.66% 00:36:22.842 lat (msec) : 2=0.05%, 4=0.05% 00:36:22.842 cpu : usr=1.30%, sys=6.90%, ctx=1987, majf=0, minf=11 00:36:22.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 issued rwts: total=962,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:22.842 job1: (groupid=0, jobs=1): err= 0: pid=127226: Wed Nov 20 17:57:04 2024 00:36:22.842 read: IOPS=1016, BW=4068KiB/s (4166kB/s)(4072KiB/1001msec) 00:36:22.842 slat (nsec): min=13987, max=57660, avg=22561.11, stdev=4652.07 00:36:22.842 clat (usec): min=298, max=829, avg=517.11, stdev=64.79 00:36:22.842 lat (usec): min=322, max=856, avg=539.67, stdev=65.21 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 359], 5.00th=[ 424], 10.00th=[ 453], 20.00th=[ 474], 00:36:22.842 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[ 510], 60.00th=[ 523], 00:36:22.842 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 644], 00:36:22.842 | 99.00th=[ 734], 99.50th=[ 775], 99.90th=[ 832], 99.95th=[ 832], 00:36:22.842 | 99.99th=[ 832] 00:36:22.842 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:22.842 slat (usec): min=15, max=112, avg=31.69, stdev=10.52 00:36:22.842 clat (usec): min=210, max=594, avg=402.99, stdev=46.24 00:36:22.842 lat (usec): min=270, max=610, avg=434.68, stdev=46.34 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 338], 20.00th=[ 367], 00:36:22.842 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 416], 00:36:22.842 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 474], 00:36:22.842 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 529], 99.95th=[ 594], 00:36:22.842 | 99.99th=[ 594] 00:36:22.842 bw ( KiB/s): min= 4096, max= 4096, per=20.78%, avg=4096.00, stdev= 0.00, samples=1 00:36:22.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:22.842 lat (usec) : 250=0.15%, 500=68.85%, 750=30.61%, 1000=0.39% 00:36:22.842 cpu : usr=1.20%, sys=4.40%, ctx=2042, majf=0, minf=23 00:36:22.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 issued rwts: total=1018,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:22.842 job2: (groupid=0, jobs=1): err= 0: pid=127233: Wed Nov 20 17:57:04 2024 00:36:22.842 read: IOPS=1016, BW=4068KiB/s (4166kB/s)(4072KiB/1001msec) 00:36:22.842 slat (nsec): min=9693, max=53445, avg=21301.59, stdev=4763.58 00:36:22.842 clat (usec): min=299, max=848, avg=518.27, stdev=65.35 00:36:22.842 lat (usec): min=329, max=877, avg=539.57, stdev=65.42 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 363], 5.00th=[ 416], 10.00th=[ 449], 20.00th=[ 478], 00:36:22.842 | 30.00th=[ 494], 40.00th=[ 502], 
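A quick consistency check on the latency rows: fio's total lat is submission latency (slat) plus completion latency (clat), so the per-job averages should add up. Taking job1 above:

    awk 'BEGIN { print 31.69 + 402.99 }'    # 434.68, matching job1's "lat ... avg=434.68"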
50.00th=[ 515], 60.00th=[ 529], 00:36:22.842 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 627], 00:36:22.842 | 99.00th=[ 742], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 848], 00:36:22.842 | 99.99th=[ 848] 00:36:22.842 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:22.842 slat (nsec): min=19479, max=86864, avg=36020.00, stdev=8981.77 00:36:22.842 clat (usec): min=195, max=577, avg=398.31, stdev=47.57 00:36:22.842 lat (usec): min=251, max=604, avg=434.33, stdev=45.16 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 277], 5.00th=[ 314], 10.00th=[ 334], 20.00th=[ 363], 00:36:22.842 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 412], 00:36:22.842 | 70.00th=[ 424], 80.00th=[ 437], 90.00th=[ 457], 95.00th=[ 469], 00:36:22.842 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 578], 00:36:22.842 | 99.99th=[ 578] 00:36:22.842 bw ( KiB/s): min= 4096, max= 4096, per=20.78%, avg=4096.00, stdev= 0.00, samples=1 00:36:22.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:22.842 lat (usec) : 250=0.34%, 500=67.63%, 750=31.64%, 1000=0.39% 00:36:22.842 cpu : usr=1.60%, sys=4.30%, ctx=2042, majf=0, minf=7 00:36:22.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 issued rwts: total=1018,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:22.842 job3: (groupid=0, jobs=1): err= 0: pid=127234: Wed Nov 20 17:57:04 2024 00:36:22.842 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:36:22.842 slat (nsec): min=13593, max=61756, avg=20387.91, stdev=5199.35 00:36:22.842 clat (usec): min=244, max=806, avg=306.38, stdev=33.92 00:36:22.842 lat (usec): min=262, max=825, avg=326.77, stdev=34.51 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:36:22.842 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:36:22.842 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:36:22.842 | 99.00th=[ 379], 99.50th=[ 506], 99.90th=[ 807], 99.95th=[ 807], 00:36:22.842 | 99.99th=[ 807] 00:36:22.842 write: IOPS=1858, BW=7433KiB/s (7611kB/s)(7440KiB/1001msec); 0 zone resets 00:36:22.842 slat (usec): min=23, max=120, avg=31.36, stdev= 7.98 00:36:22.842 clat (usec): min=168, max=456, avg=231.68, stdev=23.87 00:36:22.842 lat (usec): min=193, max=484, avg=263.04, stdev=26.48 00:36:22.842 clat percentiles (usec): 00:36:22.842 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:36:22.842 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:36:22.842 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:36:22.842 | 99.00th=[ 297], 99.50th=[ 355], 99.90th=[ 437], 99.95th=[ 457], 00:36:22.842 | 99.99th=[ 457] 00:36:22.842 bw ( KiB/s): min= 8192, max= 8192, per=41.57%, avg=8192.00, stdev= 0.00, samples=1 00:36:22.842 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:22.842 lat (usec) : 250=45.29%, 500=54.45%, 750=0.21%, 1000=0.06% 00:36:22.842 cpu : usr=1.60%, sys=7.00%, ctx=3396, majf=0, minf=7 00:36:22.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.842 issued rwts: total=1536,1860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:22.842 00:36:22.842 Run status group 0 (all jobs): 00:36:22.842 READ: bw=17.7MiB/s (18.6MB/s), 3844KiB/s-6138KiB/s (3936kB/s-6285kB/s), io=17.7MiB (18.6MB), run=1001-1001msec 00:36:22.842 WRITE: bw=19.2MiB/s (20.2MB/s), 4092KiB/s-7433KiB/s (4190kB/s-7611kB/s), io=19.3MiB (20.2MB), run=1001-1001msec 00:36:22.842 00:36:22.842 Disk stats (read/write): 00:36:22.842 nvme0n1: ios=783/1024, merge=0/0, ticks=428/406, in_queue=834, util=88.58% 00:36:22.842 nvme0n2: ios=818/1024, merge=0/0, ticks=417/398, in_queue=815, util=88.99% 00:36:22.842 nvme0n3: ios=788/1024, merge=0/0, ticks=442/417, in_queue=859, util=90.01% 00:36:22.842 nvme0n4: ios=1410/1536, merge=0/0, ticks=447/378, in_queue=825, util=89.76% 00:36:22.842 17:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:22.842 [global] 00:36:22.842 thread=1 00:36:22.842 invalidate=1 00:36:22.842 rw=write 00:36:22.842 time_based=1 00:36:22.842 runtime=1 00:36:22.842 ioengine=libaio 00:36:22.842 direct=1 00:36:22.842 bs=4096 00:36:22.842 iodepth=128 00:36:22.842 norandommap=0 00:36:22.842 numjobs=1 00:36:22.842 00:36:22.842 verify_dump=1 00:36:22.842 verify_backlog=512 00:36:22.842 verify_state_save=0 00:36:22.842 do_verify=1 00:36:22.842 verify=crc32c-intel 00:36:22.842 [job0] 00:36:22.842 filename=/dev/nvme0n1 00:36:22.842 [job1] 00:36:22.842 filename=/dev/nvme0n2 00:36:22.842 [job2] 00:36:22.842 filename=/dev/nvme0n3 00:36:22.842 [job3] 00:36:22.842 filename=/dev/nvme0n4 00:36:22.842 Could not set queue depth (nvme0n1) 00:36:22.842 Could not set queue depth (nvme0n2) 00:36:22.842 Could not set queue depth (nvme0n3) 00:36:22.842 Could not set queue depth (nvme0n4) 00:36:22.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:22.843 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:22.843 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:22.843 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:22.843 fio-3.35 00:36:22.843 Starting 4 threads 00:36:24.214 00:36:24.214 job0: (groupid=0, jobs=1): err= 0: pid=127287: Wed Nov 20 17:57:05 2024 00:36:24.214 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:36:24.214 slat (usec): min=5, max=5205, avg=117.24, stdev=477.59 00:36:24.214 clat (usec): min=11088, max=20360, avg=14998.29, stdev=1296.72 00:36:24.214 lat (usec): min=11450, max=20391, avg=15115.53, stdev=1339.94 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[11994], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:36:24.214 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:36:24.214 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16712], 95.00th=[17171], 00:36:24.214 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19530], 99.95th=[19792], 00:36:24.214 | 99.99th=[20317] 00:36:24.214 write: IOPS=4244, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec); 0 zone resets 00:36:24.214 slat (usec): min=9, max=5697, avg=113.09, stdev=406.39 00:36:24.214 clat (usec): min=4183, max=22307, avg=15343.32, stdev=1722.04 00:36:24.214 lat (usec): 
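The "Run status group" line is just the per-job numbers rolled up. Summing the four read bandwidths from this group (3844, 4068, 4068 and 6138 KiB/s) reproduces the aggregate:

    awk 'BEGIN { s = 3844 + 4068 + 4068 + 6138; printf "%d KiB/s = %.1f MiB/s\n", s, s/1024 }'
    # 18118 KiB/s = 17.7 MiB/s, matching "READ: bw=17.7MiB/s"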
min=4952, max=22358, avg=15456.40, stdev=1711.76 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[ 9634], 5.00th=[12649], 10.00th=[13566], 20.00th=[14222], 00:36:24.214 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:36:24.214 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17433], 95.00th=[17695], 00:36:24.214 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20317], 99.95th=[20579], 00:36:24.214 | 99.99th=[22414] 00:36:24.214 bw ( KiB/s): min=16561, max=16584, per=34.89%, avg=16572.50, stdev=16.26, samples=2 00:36:24.214 iops : min= 4140, max= 4146, avg=4143.00, stdev= 4.24, samples=2 00:36:24.214 lat (msec) : 10=0.65%, 20=99.27%, 50=0.08% 00:36:24.214 cpu : usr=4.08%, sys=12.95%, ctx=612, majf=0, minf=5 00:36:24.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:24.214 issued rwts: total=4096,4266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:24.214 job1: (groupid=0, jobs=1): err= 0: pid=127288: Wed Nov 20 17:57:05 2024 00:36:24.214 read: IOPS=1682, BW=6730KiB/s (6892kB/s)(6764KiB/1005msec) 00:36:24.214 slat (usec): min=8, max=8151, avg=275.90, stdev=986.51 00:36:24.214 clat (usec): min=4381, max=48323, avg=35093.08, stdev=6594.29 00:36:24.214 lat (usec): min=4408, max=48371, avg=35368.98, stdev=6601.58 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[11731], 5.00th=[27132], 10.00th=[29492], 20.00th=[31327], 00:36:24.214 | 30.00th=[32637], 40.00th=[33817], 50.00th=[34866], 60.00th=[36439], 00:36:24.214 | 70.00th=[38011], 80.00th=[40109], 90.00th=[43779], 95.00th=[45876], 00:36:24.214 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 99.95th=[48497], 00:36:24.214 | 99.99th=[48497] 00:36:24.214 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:36:24.214 slat (usec): min=7, max=10036, avg=251.46, stdev=1063.30 00:36:24.214 clat (usec): min=21613, max=43513, avg=32568.78, stdev=3714.54 00:36:24.214 lat (usec): min=22111, max=43555, avg=32820.24, stdev=3681.83 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[24249], 5.00th=[26084], 10.00th=[27657], 20.00th=[29754], 00:36:24.214 | 30.00th=[31065], 40.00th=[31327], 50.00th=[32375], 60.00th=[33424], 00:36:24.214 | 70.00th=[34341], 80.00th=[35390], 90.00th=[36439], 95.00th=[39060], 00:36:24.214 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:24.214 | 99.99th=[43254] 00:36:24.214 bw ( KiB/s): min= 8192, max= 8192, per=17.25%, avg=8192.00, stdev= 0.00, samples=2 00:36:24.214 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:36:24.214 lat (msec) : 10=0.37%, 20=1.20%, 50=98.42% 00:36:24.214 cpu : usr=2.39%, sys=6.57%, ctx=529, majf=0, minf=9 00:36:24.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:36:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:24.214 issued rwts: total=1691,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:24.214 job2: (groupid=0, jobs=1): err= 0: pid=127289: Wed Nov 20 17:57:05 2024 00:36:24.214 read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(12.1MiB/1006msec) 00:36:24.214 slat (usec): min=8, max=5602, avg=143.76, 
stdev=627.36 00:36:24.214 clat (usec): min=4378, max=25302, avg=18435.16, stdev=2408.04 00:36:24.214 lat (usec): min=5554, max=25384, avg=18578.92, stdev=2435.42 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[ 9372], 5.00th=[15139], 10.00th=[15533], 20.00th=[16450], 00:36:24.214 | 30.00th=[17433], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:36:24.214 | 70.00th=[19530], 80.00th=[20055], 90.00th=[21365], 95.00th=[21627], 00:36:24.214 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24511], 99.95th=[25035], 00:36:24.214 | 99.99th=[25297] 00:36:24.214 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:36:24.214 slat (usec): min=7, max=10382, avg=146.52, stdev=670.61 00:36:24.214 clat (usec): min=10475, max=43402, avg=19217.99, stdev=3738.79 00:36:24.214 lat (usec): min=10515, max=43461, avg=19364.52, stdev=3775.79 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[12125], 5.00th=[15401], 10.00th=[15795], 20.00th=[16909], 00:36:24.214 | 30.00th=[17957], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:36:24.214 | 70.00th=[19530], 80.00th=[20055], 90.00th=[22152], 95.00th=[26346], 00:36:24.214 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[41681], 00:36:24.214 | 99.99th=[43254] 00:36:24.214 bw ( KiB/s): min=13835, max=14112, per=29.42%, avg=13973.50, stdev=195.87, samples=2 00:36:24.214 iops : min= 3458, max= 3528, avg=3493.00, stdev=49.50, samples=2 00:36:24.214 lat (msec) : 10=0.49%, 20=78.65%, 50=20.85% 00:36:24.214 cpu : usr=3.28%, sys=10.55%, ctx=415, majf=0, minf=14 00:36:24.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:24.214 issued rwts: total=3106,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:24.214 job3: (groupid=0, jobs=1): err= 0: pid=127290: Wed Nov 20 17:57:05 2024 00:36:24.214 read: IOPS=1848, BW=7395KiB/s (7573kB/s)(7432KiB/1005msec) 00:36:24.214 slat (usec): min=4, max=8673, avg=273.17, stdev=1012.11 00:36:24.214 clat (usec): min=2153, max=47908, avg=33760.39, stdev=6567.93 00:36:24.214 lat (usec): min=4571, max=47926, avg=34033.57, stdev=6577.65 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[ 6783], 5.00th=[24773], 10.00th=[27395], 20.00th=[30278], 00:36:24.214 | 30.00th=[31851], 40.00th=[32900], 50.00th=[34341], 60.00th=[34866], 00:36:24.214 | 70.00th=[35914], 80.00th=[39060], 90.00th=[41681], 95.00th=[43779], 00:36:24.214 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:36:24.214 | 99.99th=[47973] 00:36:24.214 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:36:24.214 slat (usec): min=10, max=10177, avg=231.96, stdev=977.11 00:36:24.214 clat (usec): min=15778, max=44027, avg=31151.88, stdev=5605.77 00:36:24.214 lat (usec): min=15858, max=44053, avg=31383.84, stdev=5594.23 00:36:24.214 clat percentiles (usec): 00:36:24.214 | 1.00th=[17171], 5.00th=[19006], 10.00th=[21890], 20.00th=[27657], 00:36:24.214 | 30.00th=[30016], 40.00th=[31065], 50.00th=[31589], 60.00th=[32900], 00:36:24.214 | 70.00th=[34341], 80.00th=[35390], 90.00th=[36963], 95.00th=[39060], 00:36:24.214 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:36:24.214 | 99.99th=[43779] 00:36:24.214 bw ( KiB/s): min= 8192, max= 8208, per=17.26%, avg=8200.00, stdev=11.31, samples=2 
00:36:24.214 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:36:24.214 lat (msec) : 4=0.03%, 10=0.82%, 20=4.61%, 50=94.55% 00:36:24.214 cpu : usr=2.99%, sys=6.18%, ctx=558, majf=0, minf=5 00:36:24.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:36:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:24.215 issued rwts: total=1858,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:24.215 00:36:24.215 Run status group 0 (all jobs): 00:36:24.215 READ: bw=41.7MiB/s (43.8MB/s), 6730KiB/s-15.9MiB/s (6892kB/s-16.7MB/s), io=42.0MiB (44.0MB), run=1005-1006msec 00:36:24.215 WRITE: bw=46.4MiB/s (48.6MB/s), 8151KiB/s-16.6MiB/s (8347kB/s-17.4MB/s), io=46.7MiB (48.9MB), run=1005-1006msec 00:36:24.215 00:36:24.215 Disk stats (read/write): 00:36:24.215 nvme0n1: ios=3568/3584, merge=0/0, ticks=17002/16554, in_queue=33556, util=87.68% 00:36:24.215 nvme0n2: ios=1581/1584, merge=0/0, ticks=13642/12680, in_queue=26322, util=88.15% 00:36:24.215 nvme0n3: ios=2602/3072, merge=0/0, ticks=15471/17653, in_queue=33124, util=88.67% 00:36:24.215 nvme0n4: ios=1536/1762, merge=0/0, ticks=13376/12219, in_queue=25595, util=89.23% 00:36:24.215 17:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:24.215 [global] 00:36:24.215 thread=1 00:36:24.215 invalidate=1 00:36:24.215 rw=randwrite 00:36:24.215 time_based=1 00:36:24.215 runtime=1 00:36:24.215 ioengine=libaio 00:36:24.215 direct=1 00:36:24.215 bs=4096 00:36:24.215 iodepth=128 00:36:24.215 norandommap=0 00:36:24.215 numjobs=1 00:36:24.215 00:36:24.215 verify_dump=1 00:36:24.215 verify_backlog=512 00:36:24.215 verify_state_save=0 00:36:24.215 do_verify=1 00:36:24.215 verify=crc32c-intel 00:36:24.215 [job0] 00:36:24.215 filename=/dev/nvme0n1 00:36:24.215 [job1] 00:36:24.215 filename=/dev/nvme0n2 00:36:24.215 [job2] 00:36:24.215 filename=/dev/nvme0n3 00:36:24.215 [job3] 00:36:24.215 filename=/dev/nvme0n4 00:36:24.215 Could not set queue depth (nvme0n1) 00:36:24.215 Could not set queue depth (nvme0n2) 00:36:24.215 Could not set queue depth (nvme0n3) 00:36:24.215 Could not set queue depth (nvme0n4) 00:36:24.215 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:24.215 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:24.215 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:24.215 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:24.215 fio-3.35 00:36:24.215 Starting 4 threads 00:36:25.728 00:36:25.728 job0: (groupid=0, jobs=1): err= 0: pid=127343: Wed Nov 20 17:57:07 2024 00:36:25.728 read: IOPS=3223, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1003msec) 00:36:25.728 slat (usec): min=8, max=4666, avg=145.07, stdev=646.16 00:36:25.728 clat (usec): min=700, max=23331, avg=18737.74, stdev=2198.05 00:36:25.728 lat (usec): min=4475, max=24079, avg=18882.81, stdev=2145.57 00:36:25.728 clat percentiles (usec): 00:36:25.728 | 1.00th=[ 5145], 5.00th=[15664], 10.00th=[16712], 20.00th=[17957], 00:36:25.728 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 
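In the bw and iops rows, per= is the job's share of its group's aggregate bandwidth. Checking job0 of this group against the 46.4MiB/s WRITE total above:

    awk 'BEGIN { printf "%.2f%%\n", 16572.5 / (46.4 * 1024) * 100 }'
    # 34.88%, matching per=34.89% up to rounding of the group total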
00:36:25.728 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20317], 95.00th=[21103], 00:36:25.728 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23200], 99.95th=[23462], 00:36:25.728 | 99.99th=[23462] 00:36:25.729 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:36:25.729 slat (usec): min=11, max=4548, avg=139.17, stdev=595.13 00:36:25.729 clat (usec): min=13448, max=22743, avg=18335.65, stdev=1787.35 00:36:25.729 lat (usec): min=13568, max=22765, avg=18474.82, stdev=1753.49 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[14353], 5.00th=[15008], 10.00th=[15664], 20.00th=[16712], 00:36:25.729 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:36:25.729 | 70.00th=[19268], 80.00th=[20055], 90.00th=[20579], 95.00th=[20841], 00:36:25.729 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:36:25.729 | 99.99th=[22676] 00:36:25.729 bw ( KiB/s): min=13696, max=14976, per=27.54%, avg=14336.00, stdev=905.10, samples=2 00:36:25.729 iops : min= 3424, max= 3744, avg=3584.00, stdev=226.27, samples=2 00:36:25.729 lat (usec) : 750=0.01% 00:36:25.729 lat (msec) : 10=0.78%, 20=80.08%, 50=19.13% 00:36:25.729 cpu : usr=3.29%, sys=11.58%, ctx=378, majf=0, minf=4 00:36:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:25.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:25.729 issued rwts: total=3233,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:25.729 job1: (groupid=0, jobs=1): err= 0: pid=127344: Wed Nov 20 17:57:07 2024 00:36:25.729 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:36:25.729 slat (usec): min=7, max=4705, avg=153.94, stdev=539.79 00:36:25.729 clat (usec): min=15678, max=24050, avg=19690.25, stdev=1332.46 00:36:25.729 lat (usec): min=16135, max=24068, avg=19844.19, stdev=1270.50 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17957], 20.00th=[18744], 00:36:25.729 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:36:25.729 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21365], 95.00th=[21890], 00:36:25.729 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:36:25.729 | 99.99th=[23987] 00:36:25.729 write: IOPS=3338, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1004msec); 0 zone resets 00:36:25.729 slat (usec): min=13, max=5408, avg=147.07, stdev=477.09 00:36:25.729 clat (usec): min=2838, max=24409, avg=19709.61, stdev=2296.59 00:36:25.729 lat (usec): min=3962, max=24453, avg=19856.67, stdev=2273.80 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[ 8717], 5.00th=[16581], 10.00th=[17695], 20.00th=[18744], 00:36:25.729 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:36:25.729 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21890], 95.00th=[22676], 00:36:25.729 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:36:25.729 | 99.99th=[24511] 00:36:25.729 bw ( KiB/s): min=12288, max=13512, per=24.78%, avg=12900.00, stdev=865.50, samples=2 00:36:25.729 iops : min= 3072, max= 3378, avg=3225.00, stdev=216.37, samples=2 00:36:25.729 lat (msec) : 4=0.05%, 10=0.59%, 20=54.36%, 50=45.00% 00:36:25.729 cpu : usr=3.19%, sys=12.06%, ctx=531, majf=0, minf=5 00:36:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:36:25.729 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:25.729 issued rwts: total=3072,3352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:25.729 job2: (groupid=0, jobs=1): err= 0: pid=127345: Wed Nov 20 17:57:07 2024 00:36:25.729 read: IOPS=2748, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1005msec) 00:36:25.729 slat (usec): min=8, max=7008, avg=175.76, stdev=741.80 00:36:25.729 clat (usec): min=2180, max=31397, avg=21553.25, stdev=3262.72 00:36:25.729 lat (usec): min=5564, max=31429, avg=21729.01, stdev=3317.67 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[ 6194], 5.00th=[17171], 10.00th=[17957], 20.00th=[20055], 00:36:25.729 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:36:25.729 | 70.00th=[22676], 80.00th=[23987], 90.00th=[25297], 95.00th=[26608], 00:36:25.729 | 99.00th=[28181], 99.50th=[29754], 99.90th=[31065], 99.95th=[31327], 00:36:25.729 | 99.99th=[31327] 00:36:25.729 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:36:25.729 slat (usec): min=11, max=6259, avg=158.65, stdev=572.60 00:36:25.729 clat (usec): min=14558, max=31049, avg=21859.39, stdev=2336.59 00:36:25.729 lat (usec): min=14666, max=31081, avg=22018.04, stdev=2366.67 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[16581], 5.00th=[19006], 10.00th=[19530], 20.00th=[20055], 00:36:25.729 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:36:25.729 | 70.00th=[22938], 80.00th=[23987], 90.00th=[24511], 95.00th=[26346], 00:36:25.729 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30540], 99.95th=[31065], 00:36:25.729 | 99.99th=[31065] 00:36:25.729 bw ( KiB/s): min=12288, max=12288, per=23.60%, avg=12288.00, stdev= 0.00, samples=2 00:36:25.729 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:36:25.729 lat (msec) : 4=0.02%, 10=0.72%, 20=19.52%, 50=79.74% 00:36:25.729 cpu : usr=2.99%, sys=10.56%, ctx=415, majf=0, minf=3 00:36:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:36:25.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:25.729 issued rwts: total=2762,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:25.729 job3: (groupid=0, jobs=1): err= 0: pid=127346: Wed Nov 20 17:57:07 2024 00:36:25.729 read: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1002msec) 00:36:25.729 slat (usec): min=6, max=5222, avg=169.70, stdev=719.94 00:36:25.729 clat (usec): min=1101, max=27344, avg=21469.89, stdev=2729.50 00:36:25.729 lat (usec): min=1123, max=31146, avg=21639.59, stdev=2694.99 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[ 5735], 5.00th=[18220], 10.00th=[19530], 20.00th=[20317], 00:36:25.729 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21890], 60.00th=[22152], 00:36:25.729 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23987], 95.00th=[25035], 00:36:25.729 | 99.00th=[25822], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:36:25.729 | 99.99th=[27395] 00:36:25.729 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:36:25.729 slat (usec): min=9, max=5592, avg=159.11, stdev=750.01 00:36:25.729 clat (usec): min=15437, max=26153, avg=21137.26, stdev=1859.20 00:36:25.729 lat (usec): min=15861, max=26193, avg=21296.37, 
stdev=1786.78 00:36:25.729 clat percentiles (usec): 00:36:25.729 | 1.00th=[16450], 5.00th=[17433], 10.00th=[18220], 20.00th=[19792], 00:36:25.729 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:36:25.729 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[24249], 00:36:25.729 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25560], 99.95th=[26084], 00:36:25.729 | 99.99th=[26084] 00:36:25.729 bw ( KiB/s): min=12288, max=12288, per=23.60%, avg=12288.00, stdev= 0.00, samples=2 00:36:25.729 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:36:25.729 lat (msec) : 2=0.05%, 10=0.54%, 20=18.67%, 50=80.74% 00:36:25.729 cpu : usr=2.70%, sys=9.79%, ctx=288, majf=0, minf=7 00:36:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:36:25.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:25.729 issued rwts: total=2848,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:25.729 00:36:25.729 Run status group 0 (all jobs): 00:36:25.729 READ: bw=46.3MiB/s (48.6MB/s), 10.7MiB/s-12.6MiB/s (11.3MB/s-13.2MB/s), io=46.5MiB (48.8MB), run=1002-1005msec 00:36:25.729 WRITE: bw=50.8MiB/s (53.3MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.6MB/s), io=51.1MiB (53.6MB), run=1002-1005msec 00:36:25.729 00:36:25.729 Disk stats (read/write): 00:36:25.729 nvme0n1: ios=2824/3072, merge=0/0, ticks=12743/12690, in_queue=25433, util=88.98% 00:36:25.729 nvme0n2: ios=2609/2928, merge=0/0, ticks=12433/13530, in_queue=25963, util=89.08% 00:36:25.729 nvme0n3: ios=2527/2560, merge=0/0, ticks=17478/16519, in_queue=33997, util=89.92% 00:36:25.729 nvme0n4: ios=2516/2560, merge=0/0, ticks=13329/12337, in_queue=25666, util=89.73% 00:36:25.729 17:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:25.729 17:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=127359 00:36:25.729 17:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:25.729 17:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:25.729 [global] 00:36:25.729 thread=1 00:36:25.729 invalidate=1 00:36:25.729 rw=read 00:36:25.729 time_based=1 00:36:25.729 runtime=10 00:36:25.729 ioengine=libaio 00:36:25.729 direct=1 00:36:25.729 bs=4096 00:36:25.729 iodepth=1 00:36:25.729 norandommap=1 00:36:25.729 numjobs=1 00:36:25.729 00:36:25.729 [job0] 00:36:25.729 filename=/dev/nvme0n1 00:36:25.729 [job1] 00:36:25.729 filename=/dev/nvme0n2 00:36:25.729 [job2] 00:36:25.730 filename=/dev/nvme0n3 00:36:25.730 [job3] 00:36:25.730 filename=/dev/nvme0n4 00:36:25.730 Could not set queue depth (nvme0n1) 00:36:25.730 Could not set queue depth (nvme0n2) 00:36:25.730 Could not set queue depth (nvme0n3) 00:36:25.730 Could not set queue depth (nvme0n4) 00:36:25.730 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.730 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.730 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:25.730 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:36:25.730 fio-3.35 00:36:25.730 Starting 4 threads 00:36:29.026 17:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:29.026 fio: pid=127408, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:29.026 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=30408704, buflen=4096 00:36:29.026 17:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:29.284 fio: pid=127407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:29.284 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37396480, buflen=4096 00:36:29.284 17:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:29.284 17:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:29.542 fio: pid=127405, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:29.542 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40075264, buflen=4096 00:36:29.542 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:29.542 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:29.801 fio: pid=127406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:29.801 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61960192, buflen=4096 00:36:29.801 00:36:29.801 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=127405: Wed Nov 20 17:57:11 2024 00:36:29.801 read: IOPS=2698, BW=10.5MiB/s (11.1MB/s)(38.2MiB/3626msec) 00:36:29.801 slat (usec): min=7, max=8924, avg=19.51, stdev=150.21 00:36:29.801 clat (usec): min=191, max=3553, avg=349.47, stdev=95.74 00:36:29.801 lat (usec): min=208, max=9522, avg=368.98, stdev=179.98 00:36:29.801 clat percentiles (usec): 00:36:29.801 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 269], 00:36:29.801 | 30.00th=[ 314], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 379], 00:36:29.801 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 429], 00:36:29.801 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 898], 99.95th=[ 2704], 00:36:29.801 | 99.99th=[ 3556] 00:36:29.801 bw ( KiB/s): min= 9576, max=14064, per=25.87%, avg=10769.71, stdev=1676.94, samples=7 00:36:29.801 iops : min= 2394, max= 3516, avg=2692.43, stdev=419.23, samples=7 00:36:29.801 lat (usec) : 250=12.41%, 500=86.68%, 750=0.79%, 1000=0.02% 00:36:29.801 lat (msec) : 2=0.02%, 4=0.07% 00:36:29.801 cpu : usr=0.72%, sys=3.83%, ctx=10282, majf=0, minf=1 00:36:29.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 issued rwts: total=9785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.801 latency : target=0, window=0, 
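The hotplug sequence driven above, condensed to its shape (paths and arguments as they appear in the trace; the real fio.sh tracks fio_status rather than using ||): fio runs in the background for 10 seconds while the backing bdevs are deleted underneath it, so each affected job fails with err=95, which is Linux EOPNOTSUPP, exactly the "Operation not supported" lines above.

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0    # fails the job on nvme0n4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0      # fails the job on nvme0n3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0  # fails the job on nvme0n1
    wait "$fio_pid" || echo 'fio exited non-zero, as intended'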
percentile=100.00%, depth=1 00:36:29.801 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=127406: Wed Nov 20 17:57:11 2024 00:36:29.801 read: IOPS=3796, BW=14.8MiB/s (15.5MB/s)(59.1MiB/3985msec) 00:36:29.801 slat (usec): min=11, max=10708, avg=21.57, stdev=160.10 00:36:29.801 clat (usec): min=171, max=7459, avg=240.28, stdev=100.08 00:36:29.801 lat (usec): min=188, max=10998, avg=261.85, stdev=189.52 00:36:29.801 clat percentiles (usec): 00:36:29.801 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:36:29.801 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:36:29.801 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 281], 95.00th=[ 343], 00:36:29.801 | 99.00th=[ 424], 99.50th=[ 498], 99.90th=[ 1057], 99.95th=[ 2057], 00:36:29.801 | 99.99th=[ 4047] 00:36:29.801 bw ( KiB/s): min=11176, max=16392, per=36.33%, avg=15123.43, stdev=1966.68, samples=7 00:36:29.801 iops : min= 2794, max= 4098, avg=3780.86, stdev=491.67, samples=7 00:36:29.801 lat (usec) : 250=85.05%, 500=14.45%, 750=0.34%, 1000=0.04% 00:36:29.801 lat (msec) : 2=0.07%, 4=0.04%, 10=0.01% 00:36:29.801 cpu : usr=0.93%, sys=5.65%, ctx=15143, majf=0, minf=1 00:36:29.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 issued rwts: total=15128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.801 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=127407: Wed Nov 20 17:57:11 2024 00:36:29.801 read: IOPS=2748, BW=10.7MiB/s (11.3MB/s)(35.7MiB/3322msec) 00:36:29.801 slat (usec): min=7, max=7660, avg=19.66, stdev=109.95 00:36:29.801 clat (usec): min=205, max=4177, avg=342.41, stdev=86.13 00:36:29.801 lat (usec): min=219, max=7923, avg=362.07, stdev=138.49 00:36:29.801 clat percentiles (usec): 00:36:29.801 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 255], 00:36:29.801 | 30.00th=[ 297], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 375], 00:36:29.801 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 429], 00:36:29.801 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 603], 99.95th=[ 873], 00:36:29.801 | 99.99th=[ 4178] 00:36:29.801 bw ( KiB/s): min= 9656, max=14672, per=26.26%, avg=10929.33, stdev=1955.08, samples=6 00:36:29.801 iops : min= 2414, max= 3668, avg=2732.33, stdev=488.77, samples=6 00:36:29.801 lat (usec) : 250=17.52%, 500=81.79%, 750=0.62%, 1000=0.01% 00:36:29.801 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01% 00:36:29.801 cpu : usr=0.66%, sys=4.43%, ctx=9556, majf=0, minf=2 00:36:29.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 issued rwts: total=9131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.801 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=127408: Wed Nov 20 17:57:11 2024 00:36:29.801 read: IOPS=2476, BW=9905KiB/s (10.1MB/s)(29.0MiB/2998msec) 00:36:29.801 slat (usec): min=7, max=197, avg=18.15, stdev= 7.74 00:36:29.801 clat (usec): min=198, max=7412, avg=383.80, 
stdev=118.56 00:36:29.801 lat (usec): min=216, max=7433, avg=401.95, stdev=118.77 00:36:29.801 clat percentiles (usec): 00:36:29.801 | 1.00th=[ 243], 5.00th=[ 322], 10.00th=[ 347], 20.00th=[ 363], 00:36:29.801 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:36:29.801 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 445], 00:36:29.801 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 848], 99.95th=[ 3523], 00:36:29.801 | 99.99th=[ 7439] 00:36:29.801 bw ( KiB/s): min= 9640, max=10136, per=23.93%, avg=9958.40, stdev=187.57, samples=5 00:36:29.801 iops : min= 2410, max= 2534, avg=2489.60, stdev=46.89, samples=5 00:36:29.801 lat (usec) : 250=1.19%, 500=97.45%, 750=1.23%, 1000=0.03% 00:36:29.801 lat (msec) : 2=0.03%, 4=0.04%, 10=0.03% 00:36:29.801 cpu : usr=0.83%, sys=3.67%, ctx=7674, majf=0, minf=1 00:36:29.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.801 issued rwts: total=7425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.801 00:36:29.801 Run status group 0 (all jobs): 00:36:29.801 READ: bw=40.6MiB/s (42.6MB/s), 9905KiB/s-14.8MiB/s (10.1MB/s-15.5MB/s), io=162MiB (170MB), run=2998-3985msec 00:36:29.801 00:36:29.801 Disk stats (read/write): 00:36:29.801 nvme0n1: ios=9767/0, merge=0/0, ticks=3347/0, in_queue=3347, util=95.86% 00:36:29.801 nvme0n2: ios=14609/0, merge=0/0, ticks=3632/0, in_queue=3632, util=95.84% 00:36:29.801 nvme0n3: ios=8507/0, merge=0/0, ticks=2896/0, in_queue=2896, util=96.34% 00:36:29.801 nvme0n4: ios=7106/0, merge=0/0, ticks=2710/0, in_queue=2710, util=96.59% 00:36:29.801 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:29.801 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:30.367 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:30.367 17:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:30.625 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:30.625 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:30.882 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:30.882 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:31.448 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:31.448 17:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc6 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 127359 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:31.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:31.706 nvmf hotplug test: fio failed as expected 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:31.706 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:31.965 rmmod nvme_tcp 00:36:31.965 rmmod nvme_fabrics 
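The disconnect check traced above boils down to polling lsblk until no block device reports the test serial. A condensed sketch of the helper as traced (the retry bound is assumed; the real helper's limit is not visible in this log):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    i=0
    while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
        i=$((i + 1))
        [ "$i" -ge 20 ] && break    # assumed bound, not from the log
    done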
00:36:31.965 rmmod nvme_keyring 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 126873 ']' 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 126873 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 126873 ']' 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 126873 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126873 00:36:31.965 killing process with pid 126873 00:36:31.965 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:31.966 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:31.966 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126873' 00:36:31.966 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 126873 00:36:31.966 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 126873 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:32.224 17:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:32.224 17:57:13 
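The killprocess call traced above amounts to roughly the following (condensed; the sudo branch and error paths are not exercised in this run, so their shape is an assumption):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                        # nothing to do if already gone
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        [ "$process_name" = sudo ] && return 1            # assumed guard, not traced here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap the target process
    }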
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.482 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:36:32.482 ************************************ 00:36:32.483 END TEST nvmf_fio_target 00:36:32.483 ************************************ 00:36:32.483 00:36:32.483 real 0m21.694s 00:36:32.483 user 1m5.416s 00:36:32.483 sys 0m10.227s 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:32.483 ************************************ 00:36:32.483 START TEST nvmf_bdevio 00:36:32.483 ************************************ 00:36:32.483 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 00:36:32.742 * Looking for test storage... 00:36:32.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.742 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:32.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.742 --rc genhtml_branch_coverage=1 00:36:32.742 --rc genhtml_function_coverage=1 00:36:32.742 --rc genhtml_legend=1 00:36:32.743 --rc geninfo_all_blocks=1 00:36:32.743 --rc geninfo_unexecuted_blocks=1 00:36:32.743 00:36:32.743 ' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:32.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.743 --rc genhtml_branch_coverage=1 00:36:32.743 --rc genhtml_function_coverage=1 00:36:32.743 --rc genhtml_legend=1 00:36:32.743 --rc geninfo_all_blocks=1 00:36:32.743 --rc geninfo_unexecuted_blocks=1 00:36:32.743 00:36:32.743 ' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:32.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.743 --rc genhtml_branch_coverage=1 00:36:32.743 --rc genhtml_function_coverage=1 00:36:32.743 --rc genhtml_legend=1 00:36:32.743 --rc geninfo_all_blocks=1 00:36:32.743 --rc geninfo_unexecuted_blocks=1 00:36:32.743 00:36:32.743 ' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:32.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.743 --rc genhtml_branch_coverage=1 00:36:32.743 --rc genhtml_function_coverage=1 00:36:32.743 --rc genhtml_legend=1 00:36:32.743 --rc geninfo_all_blocks=1 00:36:32.743 --rc geninfo_unexecuted_blocks=1 00:36:32.743 00:36:32.743 ' 00:36:32.743 17:57:14 
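The `lt 1.15 2` check traced above decides whether the installed lcov predates 2.x, which in turn selects the legacy `--rc lcov_branch_coverage=1` spellings exported just below. A minimal sketch of the same component-wise comparison (a simplified rewrite, not the exact scripts/common.sh code; `ver_lt` is a hypothetical name):

    # Return 0 (true) if version $1 is strictly older than $2, comparing
    # dot-separated components numerically, treating missing components as 0.
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov older than 2.x: use the legacy --rc spellings"

Here 1 < 2 already settles the result at the first component, so 1.15 sorts before 2 even though its second component is larger.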
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.743 17:57:14 
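The PATH values printed above keep repeating /opt/golangci, /opt/protoc and /opt/go because paths/export.sh prepends its tool directories every time a test script sources the common environment, and this run sources it once per test. The harness tolerates the duplicates; purely as an illustration, one way to collapse them while keeping first-seen order:

    # Illustrative only: deduplicate PATH entries, preserving order.
    dedup_path() {
        local out='' dir
        local IFS=:
        for dir in $PATH; do
            case ":$out:" in
                *":$dir:"*) ;;                  # already kept, skip
                *) out=${out:+$out:}$dir ;;     # first occurrence, keep
            esac
        done
        printf '%s\n' "$out"
    }

    PATH=$(dedup_path)    # would collapse the repeated /opt/go/1.21.1/bin entries above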
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:32.743 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:32.744 Cannot find device "nvmf_init_br" 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:32.744 Cannot find device "nvmf_init_br2" 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:36:32.744 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:33.002 Cannot find device "nvmf_tgt_br" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:33.002 Cannot find device "nvmf_tgt_br2" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:33.002 Cannot find device "nvmf_init_br" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:33.002 Cannot find device "nvmf_init_br2" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:33.002 Cannot find device "nvmf_tgt_br" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:33.002 Cannot find device "nvmf_tgt_br2" 00:36:33.002 17:57:14 
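The Cannot find device errors here are expected: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and each ip command is paired with a true fallback so a missing interface cannot abort the script under set -e. The pattern, paraphrased (the traced common.sh spells the fallback as a separate `true` call):

    # Idempotent pre-setup cleanup: ignore failures for devices that
    # do not exist yet.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip link delete nvmf_init_if2       2>/dev/null || true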
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:33.002 Cannot find device "nvmf_br" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:33.002 Cannot find device "nvmf_init_if" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:33.002 Cannot find device "nvmf_init_if2" 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:33.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:33.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:33.002 17:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:33.002 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:33.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:33.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:36:33.261 00:36:33.261 --- 10.0.0.3 ping statistics --- 00:36:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.261 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:33.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:33.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:36:33.261 00:36:33.261 --- 10.0.0.4 ping statistics --- 00:36:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.261 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:36:33.261 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:33.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:36:33.261 00:36:33.262 --- 10.0.0.1 ping statistics --- 00:36:33.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.262 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:33.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:33.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:36:33.262 00:36:33.262 --- 10.0.0.2 ping statistics --- 00:36:33.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.262 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=127791 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 127791 00:36:33.262 
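All four pings succeed with 0% loss, confirming the veth topology nvmf_veth_init built above: initiator interfaces on the host side, target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed to one interface per side (commands taken from the trace; the real setup creates a second pair for 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host/initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ping -c 1 10.0.0.3    # host -> namespaced target, as verified above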
17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 127791 ']' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.262 17:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.262 [2024-11-20 17:57:15.010600] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:33.262 [2024-11-20 17:57:15.011941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:33.262 [2024-11-20 17:57:15.012025] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.521 [2024-11-20 17:57:15.157099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:33.521 [2024-11-20 17:57:15.266348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.521 [2024-11-20 17:57:15.266423] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.521 [2024-11-20 17:57:15.266445] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.521 [2024-11-20 17:57:15.266461] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.521 [2024-11-20 17:57:15.266473] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:33.521 [2024-11-20 17:57:15.266660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:33.521 [2024-11-20 17:57:15.267444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:33.521 [2024-11-20 17:57:15.267494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:33.521 [2024-11-20 17:57:15.267499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:33.781 [2024-11-20 17:57:15.373588] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:33.781 [2024-11-20 17:57:15.373849] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:33.781 [2024-11-20 17:57:15.374044] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
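The core mask 0x78 is binary 0111 1000, i.e. cores 3-6, which matches the four "Reactor started on core 3/4/5/6" notices, and --interrupt-mode is why the "Set spdk_thread ... to intr mode" notices appear. The launch, reduced to its essentials from the traced command:

    # Start the NVMe-oF target inside the target namespace in interrupt
    # mode, on cores 3-6 (mask 0x78), with all tracepoint groups enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # waitforlisten then blocks until the RPC socket /var/tmp/spdk.sock
    # accepts connections before any rpc_cmd is issued.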
00:36:33.781 [2024-11-20 17:57:15.374086] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:33.781 [2024-11-20 17:57:15.374581] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 [2024-11-20 17:57:15.456845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 Malloc0 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.781 [2024-11-20 17:57:15.517271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:33.781 { 00:36:33.781 "params": { 00:36:33.781 "name": "Nvme$subsystem", 00:36:33.781 "trtype": "$TEST_TRANSPORT", 00:36:33.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.781 "adrfam": "ipv4", 00:36:33.781 "trsvcid": "$NVMF_PORT", 00:36:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.781 "hdgst": ${hdgst:-false}, 00:36:33.781 "ddgst": ${ddgst:-false} 00:36:33.781 }, 00:36:33.781 "method": "bdev_nvme_attach_controller" 00:36:33.781 } 00:36:33.781 EOF 00:36:33.781 )") 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:36:33.781 17:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:33.781 "params": { 00:36:33.781 "name": "Nvme1", 00:36:33.781 "trtype": "tcp", 00:36:33.781 "traddr": "10.0.0.3", 00:36:33.781 "adrfam": "ipv4", 00:36:33.781 "trsvcid": "4420", 00:36:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:33.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:33.781 "hdgst": false, 00:36:33.781 "ddgst": false 00:36:33.781 }, 00:36:33.781 "method": "bdev_nvme_attach_controller" 00:36:33.781 }' 00:36:34.040 [2024-11-20 17:57:15.598426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
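Before bdevio starts, the target has been provisioned through four rpc_cmd calls, and gen_nvmf_target_json feeds bdevio the JSON printed above via /dev/fd/62 so it attaches Nvme1 over TCP at 10.0.0.3:4420. The same provisioning written against SPDK's stock rpc.py (rpc_cmd in this harness is a thin wrapper around it; arguments are taken verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420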
00:36:34.040 [2024-11-20 17:57:15.598565] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127836 ] 00:36:34.040 [2024-11-20 17:57:15.747320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:34.299 [2024-11-20 17:57:15.848290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.299 [2024-11-20 17:57:15.848442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.299 [2024-11-20 17:57:15.848454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.299 I/O targets: 00:36:34.299 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:34.299 00:36:34.299 00:36:34.299 CUnit - A unit testing framework for C - Version 2.1-3 00:36:34.299 http://cunit.sourceforge.net/ 00:36:34.299 00:36:34.299 00:36:34.299 Suite: bdevio tests on: Nvme1n1 00:36:34.299 Test: blockdev write read block ...passed 00:36:34.557 Test: blockdev write zeroes read block ...passed 00:36:34.557 Test: blockdev write zeroes read no split ...passed 00:36:34.557 Test: blockdev write zeroes read split ...passed 00:36:34.557 Test: blockdev write zeroes read split partial ...passed 00:36:34.557 Test: blockdev reset ...[2024-11-20 17:57:16.183634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.557 [2024-11-20 17:57:16.183803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58320 (9): Bad file descriptor 00:36:34.557 [2024-11-20 17:57:16.188329] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:36:34.557 passed 00:36:34.557 Test: blockdev write read 8 blocks ...passed 00:36:34.557 Test: blockdev write read size > 128k ...passed 00:36:34.557 Test: blockdev write read invalid size ...passed 00:36:34.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:34.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:34.557 Test: blockdev write read max offset ...passed 00:36:34.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:34.557 Test: blockdev writev readv 8 blocks ...passed 00:36:34.557 Test: blockdev writev readv 30 x 1block ...passed 00:36:34.816 Test: blockdev writev readv block ...passed 00:36:34.816 Test: blockdev writev readv size > 128k ...passed 00:36:34.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:34.816 Test: blockdev comparev and writev ...[2024-11-20 17:57:16.360349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.360412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.360434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.360446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.361077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.361106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.361124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.361134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.361606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.361635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.361667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.361677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.362153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.362182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.362200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:34.816 [2024-11-20 17:57:16.362211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:34.816 passed 00:36:34.816 Test: blockdev nvme passthru rw ...passed 00:36:34.816 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:57:16.445470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:34.816 [2024-11-20 17:57:16.445511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.445675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:34.816 [2024-11-20 17:57:16.445697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.445851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:34.816 [2024-11-20 17:57:16.445868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:34.816 [2024-11-20 17:57:16.446026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:34.816 [2024-11-20 17:57:16.446051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:34.816 passed 00:36:34.816 Test: blockdev nvme admin passthru ...passed 00:36:34.816 Test: blockdev copy ...passed 00:36:34.816 00:36:34.816 Run Summary: Type Total Ran Passed Failed Inactive 00:36:34.816 suites 1 1 n/a 0 0 00:36:34.816 tests 23 23 23 0 0 00:36:34.816 asserts 152 152 152 0 n/a 00:36:34.816 00:36:34.816 Elapsed time = 0.963 seconds 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:35.075 rmmod nvme_tcp 00:36:35.075 rmmod nvme_fabrics 00:36:35.075 rmmod nvme_keyring 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
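The iptr step in the teardown below removes only the firewall state this test added: every rule was installed by the harness's ipts helper with an SPDK_NVMF comment, so the cleanup can sweep all of them without touching unrelated rules. The tag-and-sweep pattern, with both halves visible verbatim in the trace:

    # Tag: each ACCEPT rule carries a comment naming itself.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # ... tests run ...
    # Sweep: rewrite the ruleset minus every tagged line.
    iptables-save | grep -v SPDK_NVMF | iptables-restore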
00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 127791 ']' 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 127791 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 127791 ']' 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 127791 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127791 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:36:35.075 killing process with pid 127791 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127791' 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 127791 00:36:35.075 17:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 127791 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:35.334 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:35.593 17:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:36:35.593 00:36:35.593 real 0m3.082s 00:36:35.593 user 0m7.846s 00:36:35.593 sys 0m1.156s 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:35.593 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:35.593 ************************************ 00:36:35.593 END TEST nvmf_bdevio 00:36:35.593 ************************************ 00:36:35.852 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:35.852 00:36:35.852 real 3m48.094s 00:36:35.852 user 10m5.857s 00:36:35.852 sys 1m21.983s 00:36:35.852 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:35.852 17:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:35.852 ************************************ 00:36:35.852 END TEST nvmf_target_core_interrupt_mode 00:36:35.852 ************************************ 00:36:35.852 17:57:17 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:35.852 17:57:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:35.852 17:57:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:35.852 17:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.852 ************************************ 00:36:35.852 START TEST nvmf_interrupt 00:36:35.852 ************************************ 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:35.852 * Looking for test storage... 00:36:35.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.852 --rc genhtml_branch_coverage=1 00:36:35.852 --rc genhtml_function_coverage=1 00:36:35.852 --rc genhtml_legend=1 00:36:35.852 --rc geninfo_all_blocks=1 00:36:35.852 --rc geninfo_unexecuted_blocks=1 00:36:35.852 00:36:35.852 ' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.852 --rc genhtml_branch_coverage=1 00:36:35.852 --rc genhtml_function_coverage=1 00:36:35.852 --rc genhtml_legend=1 00:36:35.852 --rc geninfo_all_blocks=1 00:36:35.852 --rc geninfo_unexecuted_blocks=1 00:36:35.852 00:36:35.852 ' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.852 --rc genhtml_branch_coverage=1 00:36:35.852 --rc genhtml_function_coverage=1 00:36:35.852 --rc genhtml_legend=1 00:36:35.852 --rc geninfo_all_blocks=1 00:36:35.852 --rc geninfo_unexecuted_blocks=1 00:36:35.852 00:36:35.852 ' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.852 --rc genhtml_branch_coverage=1 00:36:35.852 --rc genhtml_function_coverage=1 00:36:35.852 --rc genhtml_legend=1 00:36:35.852 --rc geninfo_all_blocks=1 00:36:35.852 --rc geninfo_unexecuted_blocks=1 00:36:35.852 00:36:35.852 ' 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:35.852 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
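The lcov gate traced above comes from scripts/common.sh: cmp_versions splits each version string on '.', '-' and ':' into an array and compares it field by field, so 'lt 1.15 2' succeeds and the older lcov option set is selected. A minimal standalone sketch of that comparison, assuming plain numeric fields (function name and argument handling are simplified here, not the full scripts/common.sh implementation):

    # Sketch: field-wise "less than" for dotted version strings, as traced above.
    # Assumes purely numeric fields; missing fields compare as 0.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # a larger field decides: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # mirrors the 'lt 1.15 2' call in the trace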
00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:36.112 17:57:17 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@456 -- # nvmf_veth_init 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
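With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, whose work fills the next entries: the 'Cannot find device' lines are a tolerated cleanup pass over leftovers, after which the harness creates veth pairs for the initiator side and for a dedicated target network namespace and bridges them together. A condensed sketch of that topology, with commands taken from the trace below (one initiator pair and one target pair shown; the *2 twins, the second addresses, and the firewall rules follow the same pattern):

    # Condensed sketch of the topology nvmf_veth_init builds below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # both stray veth ends join the bridge,
    ip link set nvmf_tgt_br master nvmf_br                      # so 10.0.0.1 can reach 10.0.0.3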
00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:36.112 Cannot find device "nvmf_init_br" 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:36.112 Cannot find device "nvmf_init_br2" 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:36.112 Cannot find device "nvmf_tgt_br" 00:36:36.112 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:36.113 Cannot find device "nvmf_tgt_br2" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:36.113 Cannot find device "nvmf_init_br" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:36.113 Cannot find device "nvmf_init_br2" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:36.113 Cannot find device "nvmf_tgt_br" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:36.113 Cannot find device "nvmf_tgt_br2" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:36.113 Cannot find device "nvmf_br" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:36:36.113 Cannot find device "nvmf_init_if" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:36.113 Cannot find device "nvmf_init_if2" 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:36.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:36.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:36.113 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
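After the bridge enslaving just below, the ipts calls insert ACCEPT rules for the NVMe/TCP port and for bridge-local forwarding, each tagged with an 'SPDK_NVMF:' comment; that tag is what lets iptr at teardown (nvmf/common.sh@297 near the end of this test) sweep exactly these rules back out. A sketch of the pair, reconstructed from the expansions in the trace (the real helpers live in nvmf/common.sh):

    # Tag-and-sweep firewall pattern, reconstructed from the traced expansions.
    ipts() {
        # every inserted rule carries a comment naming its own arguments
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # drop all tagged rules in one pass by filtering the saved ruleset
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                # permit traffic across the bridge
    # ... test runs ...
    iptr                                                           # teardown: only SPDK_NVMF rules vanish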
00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:36.372 17:57:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:36.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:36.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:36:36.372 00:36:36.372 --- 10.0.0.3 ping statistics --- 00:36:36.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.372 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:36.372 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:36.372 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:36:36.372 00:36:36.372 --- 10.0.0.4 ping statistics --- 00:36:36.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.372 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:36.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:36.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:36:36.372 00:36:36.372 --- 10.0.0.1 ping statistics --- 00:36:36.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.372 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:36.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:36.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:36:36.372 00:36:36.372 --- 10.0.0.2 ping statistics --- 00:36:36.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.372 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@457 -- # return 0 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=128084 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 128084 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 128084 ']' 00:36:36.372 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.373 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:36.373 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.373 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:36.373 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.631 [2024-11-20 17:57:18.162172] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:36.631 [2024-11-20 17:57:18.163681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:36.631 [2024-11-20 17:57:18.163890] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.631 [2024-11-20 17:57:18.308679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:36.631 [2024-11-20 17:57:18.395598] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:36.631 [2024-11-20 17:57:18.395851] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.631 [2024-11-20 17:57:18.396109] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.631 [2024-11-20 17:57:18.396372] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.631 [2024-11-20 17:57:18.396611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:36.631 [2024-11-20 17:57:18.396969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.631 [2024-11-20 17:57:18.396992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.942 [2024-11-20 17:57:18.492164] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:36.942 [2024-11-20 17:57:18.492527] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:36.942 [2024-11-20 17:57:18.492811] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:36.942 5000+0 records in 00:36:36.942 5000+0 records out 00:36:36.942 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0290905 s, 352 MB/s 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.942 AIO0 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.942 [2024-11-20 17:57:18.654049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.942 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:37.201 [2024-11-20 17:57:18.698502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 128084 0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 0 idle 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128084 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.31 reactor_0' 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128084 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.31 reactor_0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 128084 1 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 1 idle 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:37.201 17:57:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128088 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.00 reactor_1' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128088 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.00 reactor_1 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=128140 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:37.461 
17:57:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 128084 0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 128084 0 busy 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128084 root 20 0 64.2g 47872 33920 D 12.5 0.4 0:00.33 reactor_0' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128084 root 20 0 64.2g 47872 33920 D 12.5 0.4 0:00.33 reactor_0 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=12.5 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=12 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:37.461 17:57:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128084 root 20 0 64.2g 47872 33920 R 99.9 0.4 0:01.80 reactor_0' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128084 root 20 0 64.2g 47872 33920 R 99.9 0.4 0:01.80 reactor_0 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 128084 1 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 128084 1 busy 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128088 root 20 0 64.2g 47872 33920 R 66.7 0.4 0:00.85 reactor_1' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128088 root 20 0 64.2g 47872 33920 R 66.7 0.4 0:00.85 reactor_1 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:38.835 17:57:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 128140 00:36:48.808 Initializing NVMe Controllers 00:36:48.808 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:48.808 Controller IO queue size 256, less than required. 00:36:48.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:48.808 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:48.808 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:48.808 Initialization complete. Launching workers. 
00:36:48.808 ========================================================
00:36:48.808 Latency(us)
00:36:48.808 Device Information : IOPS MiB/s Average min max
00:36:48.808 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 5726.79 22.37 44785.51 6398.56 91686.05
00:36:48.808 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6001.19 23.44 42725.19 7584.59 129963.44
00:36:48.808 ========================================================
00:36:48.808 Total : 11727.98 45.81 43731.25 6398.56 129963.44
00:36:48.808
00:36:48.808 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 128084 0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 0 idle 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128084 root 20 0 64.2g 47872 33920 S 6.7 0.4 0:14.18 reactor_0' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128084 root 20 0 64.2g 47872 33920 S 6.7 0.4 0:14.18 reactor_0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 128084 1 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 1 idle 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128088 root 20 0 64.2g 47872 33920 S 0.0 0.4 0:06.92 reactor_1' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128088 root 20 0 64.2g 47872 33920 S 0.0 0.4 0:06.92 reactor_1 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:48.809 17:57:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 128084 0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 0 idle 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128084 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:14.24 reactor_0' 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128084 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:14.24 reactor_0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 128084 1 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128084 1 idle 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128084 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128084 -w 256 00:36:50.187 17:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128088 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:06.93 reactor_1' 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128088 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:06.93 reactor_1 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:50.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:50.447 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:50.706 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.706 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:50.706 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.706 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.965 rmmod nvme_tcp 00:36:50.965 rmmod nvme_fabrics 00:36:50.965 rmmod nvme_keyring 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 128084 ']' 00:36:50.965 17:57:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 128084 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 128084 ']' 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 128084 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128084 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:50.965 killing process with pid 128084 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128084' 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 128084 00:36:50.965 17:57:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 128084 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:51.223 17:57:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:36:51.481 00:36:51.481 real 0m15.687s 00:36:51.481 user 0m29.025s 00:36:51.481 sys 0m7.035s 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:51.481 ************************************ 00:36:51.481 END TEST nvmf_interrupt 00:36:51.481 ************************************ 00:36:51.481 17:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:51.481 00:36:51.481 real 29m1.005s 00:36:51.481 user 84m30.439s 00:36:51.481 sys 6m13.864s 00:36:51.481 17:57:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:51.481 17:57:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.481 ************************************ 00:36:51.481 END TEST nvmf_tcp 00:36:51.481 ************************************ 00:36:51.481 17:57:33 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:36:51.481 17:57:33 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:51.481 17:57:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:51.481 17:57:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:51.481 17:57:33 -- common/autotest_common.sh@10 -- # set +x 00:36:51.481 ************************************ 00:36:51.481 START TEST spdkcli_nvmf_tcp 00:36:51.481 ************************************ 00:36:51.481 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:51.739 * Looking for test storage... 
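The nvmf_interrupt pass above ends the way every interrupt-mode check in this suite does: the target's reactor thread is sampled with top, a 0.0 %CPU reading is taken as idle, and only then is the controller disconnected and the stack torn down. A standalone sketch of that idle probe, reconstructed from the interrupt/common.sh trace above (the wrapper function name and the idle_threshold default are illustrative, not part of the suite):

    #!/usr/bin/env bash
    # Poll a reactor thread's %CPU and succeed once it reads as idle.
    # PID 128084 / thread name reactor_1 are the values from the trace above.
    reactor_is_idle() {
      local pid=$1 thread=$2 idle_threshold=${3:-15} j top_reactor cpu_rate
      for (( j = 10; j != 0; j-- )); do
        # -b batch mode, -H show threads, -n 1 one sample, -w 256 wide output
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "$thread")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                       # "0.0" -> "0": integer math only
        (( cpu_rate > idle_threshold )) || return 0   # idle enough, stop polling
        sleep 0.5
      done
      return 1                                        # still busy after 10 samples
    }

    reactor_is_idle 128084 reactor_1 && echo "reactor_1 idle, safe to disconnect"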
00:36:51.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.739 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.740 --rc genhtml_branch_coverage=1 00:36:51.740 --rc genhtml_function_coverage=1 00:36:51.740 --rc genhtml_legend=1 00:36:51.740 --rc geninfo_all_blocks=1 00:36:51.740 --rc geninfo_unexecuted_blocks=1 00:36:51.740 00:36:51.740 ' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.740 --rc genhtml_branch_coverage=1 
00:36:51.740 --rc genhtml_function_coverage=1 00:36:51.740 --rc genhtml_legend=1 00:36:51.740 --rc geninfo_all_blocks=1 00:36:51.740 --rc geninfo_unexecuted_blocks=1 00:36:51.740 00:36:51.740 ' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.740 --rc genhtml_branch_coverage=1 00:36:51.740 --rc genhtml_function_coverage=1 00:36:51.740 --rc genhtml_legend=1 00:36:51.740 --rc geninfo_all_blocks=1 00:36:51.740 --rc geninfo_unexecuted_blocks=1 00:36:51.740 00:36:51.740 ' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.740 --rc genhtml_branch_coverage=1 00:36:51.740 --rc genhtml_function_coverage=1 00:36:51.740 --rc genhtml_legend=1 00:36:51.740 --rc geninfo_all_blocks=1 00:36:51.740 --rc geninfo_unexecuted_blocks=1 00:36:51.740 00:36:51.740 ' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:51.740 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=128467 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 128467 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 128467 ']' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:51.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:51.740 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.740 [2024-11-20 17:57:33.523036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:51.740 [2024-11-20 17:57:33.523171] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128467 ] 00:36:51.998 [2024-11-20 17:57:33.665210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:51.998 [2024-11-20 17:57:33.756430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.998 [2024-11-20 17:57:33.756448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:52.256 17:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:52.256 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:52.256 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:52.256 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:52.256 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:52.256 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:52.256 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:52.256 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 
N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:52.257 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:52.257 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:52.257 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:52.257 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:52.257 ' 00:36:55.540 [2024-11-20 17:57:36.728868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:56.474 [2024-11-20 17:57:38.062878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:59.003 [2024-11-20 17:57:40.526143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:00.975 [2024-11-20 17:57:42.663701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 
'Malloc4', True] 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:02.877 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:02.877 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:02.877 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:02.877 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:02.877 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:02.877 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:02.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:02.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:02.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:02.878 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:02.878 17:57:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:37:03.446 17:57:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:03.446 17:57:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:03.446 17:57:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:03.446 17:57:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:03.446 17:57:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.446 17:57:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:03.446 17:57:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:03.446 17:57:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.446 17:57:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:03.446 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:03.446 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:03.446 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:03.446 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:03.446 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:03.446 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:03.446 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:03.446 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:03.446 ' 00:37:10.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:10.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:10.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:10.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:10.019 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:10.019 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:10.019 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:10.019 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:10.019 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 128467 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 128467 ']' 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 128467 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128467 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:10.019 killing process with pid 128467 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128467' 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 128467 00:37:10.019 17:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 128467 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 128467 ']' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 128467 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 128467 ']' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 128467 00:37:10.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (128467) - No such process 00:37:10.019 Process with pid 128467 is not found 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 128467 is not found' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:10.019 ************************************ 00:37:10.019 END TEST spdkcli_nvmf_tcp 00:37:10.019 ************************************ 
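The spdkcli_nvmf_tcp test that just finished is driven entirely by two batches of spdkcli commands: one batch builds the /nvmf tree (malloc bdevs, transport, subsystems, namespaces, listeners, hosts), a match check verifies the resulting tree, and a second batch deletes everything in reverse. A condensed sketch of that cycle, with only a couple of commands per batch and paths taken from the trace (the batches here are deliberately abridged, and the third field per line is, as far as the trace shows, whether the match string is expected in later output):

    #!/usr/bin/env bash
    # Condensed spdkcli create/verify/delete cycle (abridged command batches).
    spdk=/home/vagrant/spdk_repo/spdk

    # Each batch line: '<spdkcli command>' '<string to match>' <expected?>
    "$spdk/test/spdkcli/spdkcli_job.py" "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
    '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True"

    # Dump the live /nvmf tree and diff it against the stored expectation.
    "$spdk/scripts/spdkcli.py" ll /nvmf > "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test"
    "$spdk/test/app/match/match" "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match"

    # Tear the config back down the same way it was built.
    "$spdk/test/spdkcli/spdkcli_job.py" "'/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode1' False
    '/bdevs/malloc delete Malloc1' 'Malloc1' False"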
00:37:10.019 00:37:10.019 real 0m17.784s 00:37:10.019 user 0m38.702s 00:37:10.019 sys 0m0.890s 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:10.019 17:57:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.019 17:57:51 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:10.019 17:57:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:10.019 17:57:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:10.019 17:57:51 -- common/autotest_common.sh@10 -- # set +x 00:37:10.019 ************************************ 00:37:10.019 START TEST nvmf_identify_passthru 00:37:10.019 ************************************ 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:10.019 * Looking for test storage... 00:37:10.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.019 17:57:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.019 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:10.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.019 --rc genhtml_branch_coverage=1 00:37:10.019 --rc genhtml_function_coverage=1 00:37:10.019 --rc genhtml_legend=1 00:37:10.019 --rc geninfo_all_blocks=1 00:37:10.019 --rc geninfo_unexecuted_blocks=1 00:37:10.019 00:37:10.020 ' 00:37:10.020 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.020 --rc genhtml_branch_coverage=1 00:37:10.020 --rc genhtml_function_coverage=1 00:37:10.020 --rc genhtml_legend=1 00:37:10.020 --rc geninfo_all_blocks=1 00:37:10.020 --rc geninfo_unexecuted_blocks=1 00:37:10.020 00:37:10.020 ' 00:37:10.020 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.020 --rc genhtml_branch_coverage=1 00:37:10.020 --rc genhtml_function_coverage=1 00:37:10.020 --rc genhtml_legend=1 00:37:10.020 --rc geninfo_all_blocks=1 00:37:10.020 --rc geninfo_unexecuted_blocks=1 00:37:10.020 00:37:10.020 ' 00:37:10.020 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.020 --rc genhtml_branch_coverage=1 00:37:10.020 --rc genhtml_function_coverage=1 00:37:10.020 --rc genhtml_legend=1 00:37:10.020 --rc geninfo_all_blocks=1 00:37:10.020 --rc geninfo_unexecuted_blocks=1 00:37:10.020 00:37:10.020 ' 00:37:10.020 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.020 
17:57:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:10.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.020 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:10.020 17:57:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.020 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.020 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:10.020 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@456 -- # nvmf_veth_init 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:10.020 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:10.020 Cannot find device "nvmf_init_br" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:10.021 Cannot find device "nvmf_init_br2" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:10.021 Cannot find device "nvmf_tgt_br" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:10.021 Cannot find device "nvmf_tgt_br2" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:10.021 Cannot find device "nvmf_init_br" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:10.021 Cannot find device "nvmf_init_br2" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:10.021 Cannot find device "nvmf_tgt_br" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:10.021 Cannot find device "nvmf_tgt_br2" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:10.021 Cannot find device "nvmf_br" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:10.021 Cannot find device "nvmf_init_if" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:10.021 Cannot find device "nvmf_init_if2" 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:10.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:10.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:10.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:10.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:37:10.021 00:37:10.021 --- 10.0.0.3 ping statistics --- 00:37:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.021 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:10.021 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:10.021 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:37:10.021 00:37:10.021 --- 10.0.0.4 ping statistics --- 00:37:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.021 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:10.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:10.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:37:10.021 00:37:10.021 --- 10.0.0.1 ping statistics --- 00:37:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.021 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:10.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:10.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms
00:37:10.021 
00:37:10.021 --- 10.0.0.2 ping statistics ---
00:37:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:10.021 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@457 -- # return 0
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:37:10.021 17:57:51 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=()
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=()
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:37:10.021 17:57:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']'
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:37:10.021 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:37:10.280 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340
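Everything nvmf_veth_init traced above reduces to a small virtual topology: two veth pairs whose far ends sit in the nvmf_tgt_ns_spdk namespace for the target, two pairs on the host side for the initiator, and one bridge tying the four together, with iptables ACCEPT rules for port 4420 and a ping per address to prove connectivity. The same topology as a compact script (interface names and addresses are exactly the ones in the trace; error handling and the pre-cleanup "Cannot find device" phase are omitted):

    #!/usr/bin/env bash
    # The veth/bridge topology nvmf_veth_init builds (names from the trace).
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Host-side initiator pairs and target pairs; the *_br ends get bridged.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Initiator addresses on the host, target addresses inside the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # One bridge joins the initiator and target legs.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
    done

    # Let NVMe/TCP (port 4420) in, let the bridge forward, then smoke-test.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1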
00:37:10.280 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:37:10.280 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:10.280 17:57:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=128963 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:10.539 17:57:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 128963 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 128963 ']' 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.539 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:10.540 17:57:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.540 [2024-11-20 17:57:52.250724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:10.540 [2024-11-20 17:57:52.251028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.798 [2024-11-20 17:57:52.392982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:10.798 [2024-11-20 17:57:52.505618] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.798 [2024-11-20 17:57:52.505679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.798 [2024-11-20 17:57:52.505694] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.798 [2024-11-20 17:57:52.505705] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
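nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc so that nvmf_set_config --passthru-identify-ctrlr can be applied before framework_start_init, which is exactly what the rpc_cmd calls below do. A minimal start-and-wait sketch, assuming the default /var/tmp/spdk.sock; the bounded probe loop stands in for waitforlisten, which also polls with rpc_get_methods.

# Minimal start-and-wait sketch; the loop approximates waitforlisten.
rootdir=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk \
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
for _ in $(seq 1 100); do
  # Succeeds only once the app is up and listening on the UNIX socket.
  "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
  sleep 0.5
done
# Configure the passthru identify handler while the framework is still held,
# then let initialization proceed.
"$rootdir/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
"$rootdir/scripts/rpc.py" framework_start_init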
00:37:10.798 [2024-11-20 17:57:52.505714] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.798 [2024-11-20 17:57:52.505863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.798 [2024-11-20 17:57:52.506544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.798 [2024-11-20 17:57:52.506852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.798 [2024-11-20 17:57:52.506858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:37:11.734 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.734 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.734 [2024-11-20 17:57:53.430901] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.734 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.734 [2024-11-20 17:57:53.445208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.734 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:11.734 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:11.735 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.735 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:37:11.735 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.735 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.993 Nvme0n1 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.993 [2024-11-20 17:57:53.587973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.993 [ 00:37:11.993 { 00:37:11.993 "allow_any_host": true, 00:37:11.993 "hosts": [], 00:37:11.993 "listen_addresses": [], 00:37:11.993 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:11.993 "subtype": "Discovery" 00:37:11.993 }, 00:37:11.993 { 00:37:11.993 "allow_any_host": true, 00:37:11.993 "hosts": [], 00:37:11.993 "listen_addresses": [ 00:37:11.993 { 00:37:11.993 "adrfam": "IPv4", 00:37:11.993 "traddr": "10.0.0.3", 00:37:11.993 "trsvcid": "4420", 00:37:11.993 "trtype": "TCP" 00:37:11.993 } 00:37:11.993 ], 00:37:11.993 "max_cntlid": 65519, 00:37:11.993 "max_namespaces": 1, 00:37:11.993 "min_cntlid": 1, 00:37:11.993 "model_number": "SPDK bdev Controller", 00:37:11.993 "namespaces": [ 00:37:11.993 { 00:37:11.993 "bdev_name": "Nvme0n1", 00:37:11.993 "name": "Nvme0n1", 00:37:11.993 "nguid": "477229AB2AA44ADCA289592F68203C0F", 00:37:11.993 "nsid": 1, 00:37:11.993 "uuid": "477229ab-2aa4-4adc-a289-592f68203c0f" 00:37:11.993 } 00:37:11.993 ], 00:37:11.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.993 "serial_number": "SPDK00000000000001", 00:37:11.993 "subtype": "NVMe" 00:37:11.993 } 00:37:11.993 ] 00:37:11.993 17:57:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:11.993 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:12.252 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:37:12.252 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:12.252 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:12.252 17:57:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:12.510 17:57:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:37:12.510 17:57:54 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.510 17:57:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:12.510 17:57:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:12.510 rmmod nvme_tcp 00:37:12.510 rmmod nvme_fabrics 00:37:12.510 rmmod nvme_keyring 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 128963 ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 128963 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 128963 ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 128963 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128963 00:37:12.510 killing process with pid 128963 00:37:12.510 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:12.511 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:12.511 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128963' 00:37:12.511 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 128963 00:37:12.511 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 128963 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:12.769 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.027 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:13.027 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.027 17:57:54 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:37:13.027 00:37:13.027 real 0m3.667s 00:37:13.027 user 0m8.482s 00:37:13.027 sys 0m1.026s 00:37:13.027 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:13.027 17:57:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:13.027 ************************************ 00:37:13.027 END TEST nvmf_identify_passthru 00:37:13.027 ************************************ 00:37:13.027 17:57:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:37:13.028 17:57:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:13.028 17:57:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:13.028 17:57:54 -- common/autotest_common.sh@10 -- # set +x 00:37:13.028 ************************************ 00:37:13.028 START TEST nvmf_dif 00:37:13.028 ************************************ 00:37:13.028 17:57:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:37:13.286 * Looking for test storage... 
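The nvmf_identify_passthru teardown just traced works because setup tagged every firewall rule with an SPDK_NVMF comment: restoring the saved ruleset minus those lines removes exactly the test's rules, and bridge ports are detached before anything is deleted. A hedged reconstruction of that ordering follows.

# Hedged reconstruction of the teardown sequence traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" nomaster && ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if    # deleting one veth end removes its peer too
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk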
00:37:13.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:13.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.287 --rc genhtml_branch_coverage=1 00:37:13.287 --rc genhtml_function_coverage=1 00:37:13.287 --rc genhtml_legend=1 00:37:13.287 --rc geninfo_all_blocks=1 00:37:13.287 --rc geninfo_unexecuted_blocks=1 00:37:13.287 00:37:13.287 ' 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:13.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.287 --rc genhtml_branch_coverage=1 00:37:13.287 --rc genhtml_function_coverage=1 00:37:13.287 --rc genhtml_legend=1 00:37:13.287 --rc geninfo_all_blocks=1 00:37:13.287 --rc geninfo_unexecuted_blocks=1 00:37:13.287 00:37:13.287 ' 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:37:13.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.287 --rc genhtml_branch_coverage=1 00:37:13.287 --rc genhtml_function_coverage=1 00:37:13.287 --rc genhtml_legend=1 00:37:13.287 --rc geninfo_all_blocks=1 00:37:13.287 --rc geninfo_unexecuted_blocks=1 00:37:13.287 00:37:13.287 ' 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:13.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.287 --rc genhtml_branch_coverage=1 00:37:13.287 --rc genhtml_function_coverage=1 00:37:13.287 --rc genhtml_legend=1 00:37:13.287 --rc geninfo_all_blocks=1 00:37:13.287 --rc geninfo_unexecuted_blocks=1 00:37:13.287 00:37:13.287 ' 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.287 17:57:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:13.287 17:57:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.287 17:57:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.287 17:57:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.287 17:57:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:13.287 17:57:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:13.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:13.287 17:57:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:13.287 17:57:54 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.287 17:57:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:13.288 17:57:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:37:13.288 17:57:54 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:13.288 17:57:54 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:13.288 Cannot find device "nvmf_init_br" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:13.288 Cannot find device "nvmf_init_br2" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:13.288 Cannot find device "nvmf_tgt_br" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:13.288 Cannot find device "nvmf_tgt_br2" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:13.288 Cannot find device "nvmf_init_br" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@166 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:13.288 Cannot find device "nvmf_init_br2" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@167 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:13.288 Cannot find device "nvmf_tgt_br" 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@168 -- # true 00:37:13.288 17:57:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:13.546 Cannot find device "nvmf_tgt_br2" 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@169 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:13.546 Cannot find device "nvmf_br" 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@170 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:37:13.546 Cannot find device "nvmf_init_if" 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@171 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:13.546 Cannot find device "nvmf_init_if2" 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@172 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:13.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@173 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:13.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@174 -- # true 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:13.546 17:57:55 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:13.547 17:57:55 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:13.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:13.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:37:13.547 00:37:13.547 --- 10.0.0.3 ping statistics --- 00:37:13.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.547 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:37:13.547 17:57:55 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:13.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:13.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:37:13.805 00:37:13.805 --- 10.0.0.4 ping statistics --- 00:37:13.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.805 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:13.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:13.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:37:13.805 00:37:13.805 --- 10.0.0.1 ping statistics --- 00:37:13.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.805 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:13.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:13.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:37:13.805 00:37:13.805 --- 10.0.0.2 ping statistics --- 00:37:13.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.805 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:37:13.805 17:57:55 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:14.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:14.064 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:14.064 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:14.064 17:57:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:14.064 17:57:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=129371 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:14.064 17:57:55 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 129371 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 129371 ']' 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:14.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.064 17:57:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:14.064 [2024-11-20 17:57:55.822364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:14.064 [2024-11-20 17:57:55.822458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.323 [2024-11-20 17:57:55.963266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.323 [2024-11-20 17:57:56.066658] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
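With --dif-insert-or-strip appended to the transport options, the rpc_cmd calls that follow build the DIF fixture: a TCP transport that inserts and strips protection information on the target side, plus a null bdev whose 512-byte blocks carry 16 bytes of metadata formatted as DIF type 1. The same calls written as plain rpc.py invocations against the default socket (rpc_cmd in the harness is the equivalent wrapper):

# The DIF fixture assembled by the rpc_cmd calls below; parameters mirror
# the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420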
00:37:14.323 [2024-11-20 17:57:56.066734] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.323 [2024-11-20 17:57:56.066758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.323 [2024-11-20 17:57:56.066775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.323 [2024-11-20 17:57:56.066809] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.323 [2024-11-20 17:57:56.066864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:37:15.256 17:57:56 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.256 17:57:56 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.256 17:57:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:15.256 17:57:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.256 17:57:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.256 [2024-11-20 17:57:57.002879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.256 17:57:57 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.256 17:57:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:15.256 17:57:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:15.256 17:57:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:15.256 17:57:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.256 ************************************ 00:37:15.256 START TEST fio_dif_1_default 00:37:15.256 ************************************ 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:15.256 bdev_null0 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.256 17:57:57 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.256 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:15.513 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:15.514 [2024-11-20 17:57:57.047124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:15.514 { 00:37:15.514 "params": { 00:37:15.514 "name": "Nvme$subsystem", 00:37:15.514 "trtype": "$TEST_TRANSPORT", 00:37:15.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.514 "adrfam": "ipv4", 00:37:15.514 "trsvcid": "$NVMF_PORT", 00:37:15.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.514 "hdgst": ${hdgst:-false}, 00:37:15.514 "ddgst": ${ddgst:-false} 00:37:15.514 }, 00:37:15.514 "method": "bdev_nvme_attach_controller" 00:37:15.514 } 00:37:15.514 EOF 00:37:15.514 )") 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:15.514 17:57:57 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:15.514 "params": { 00:37:15.514 "name": "Nvme0", 00:37:15.514 "trtype": "tcp", 00:37:15.514 "traddr": "10.0.0.3", 00:37:15.514 "adrfam": "ipv4", 00:37:15.514 "trsvcid": "4420", 00:37:15.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.514 "hdgst": false, 00:37:15.514 "ddgst": false 00:37:15.514 }, 00:37:15.514 "method": "bdev_nvme_attach_controller" 00:37:15.514 }' 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:15.514 17:57:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.514 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:15.514 fio-3.35 00:37:15.514 Starting 1 thread 00:37:27.738 00:37:27.738 filename0: (groupid=0, jobs=1): err= 0: pid=129457: Wed Nov 20 17:58:07 2024 00:37:27.738 read: IOPS=876, BW=3507KiB/s (3591kB/s)(34.3MiB/10011msec) 00:37:27.738 slat (nsec): min=4695, max=61501, avg=8934.11, stdev=3856.86 00:37:27.738 clat (usec): min=422, max=42585, avg=4535.76, stdev=12112.87 00:37:27.738 lat (usec): min=428, max=42595, avg=4544.69, stdev=12112.90 00:37:27.738 clat percentiles (usec): 00:37:27.738 | 1.00th=[ 449], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 498], 00:37:27.738 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 
529], 00:37:27.738 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 685], 95.00th=[41157], 00:37:27.738 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:27.738 | 99.99th=[42730] 00:37:27.738 bw ( KiB/s): min= 832, max=12608, per=100.00%, avg=3508.80, stdev=3414.63, samples=20 00:37:27.738 iops : min= 208, max= 3152, avg=877.20, stdev=853.66, samples=20 00:37:27.738 lat (usec) : 500=24.16%, 750=65.91% 00:37:27.738 lat (msec) : 10=0.05%, 50=9.89% 00:37:27.738 cpu : usr=90.87%, sys=8.29%, ctx=39, majf=0, minf=0 00:37:27.738 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.738 issued rwts: total=8776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.738 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:27.738 00:37:27.738 Run status group 0 (all jobs): 00:37:27.738 READ: bw=3507KiB/s (3591kB/s), 3507KiB/s-3507KiB/s (3591kB/s-3591kB/s), io=34.3MiB (35.9MB), run=10011-10011msec 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 ************************************ 00:37:27.738 END TEST fio_dif_1_default 00:37:27.738 ************************************ 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 00:37:27.738 real 0m11.057s 00:37:27.738 user 0m9.749s 00:37:27.738 sys 0m1.125s 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 17:58:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:27.738 17:58:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:27.738 17:58:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 ************************************ 00:37:27.738 START TEST fio_dif_1_multi_subsystems 00:37:27.738 ************************************ 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 bdev_null0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.738 [2024-11-20 17:58:08.160847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:27.738 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.739 bdev_null1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:27.739 { 00:37:27.739 "params": { 00:37:27.739 "name": "Nvme$subsystem", 00:37:27.739 "trtype": "$TEST_TRANSPORT", 00:37:27.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.739 "adrfam": "ipv4", 00:37:27.739 "trsvcid": "$NVMF_PORT", 00:37:27.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.739 "hdgst": ${hdgst:-false}, 00:37:27.739 "ddgst": ${ddgst:-false} 00:37:27.739 }, 00:37:27.739 "method": "bdev_nvme_attach_controller" 00:37:27.739 } 00:37:27.739 EOF 00:37:27.739 )") 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:27.739 { 00:37:27.739 "params": { 00:37:27.739 "name": "Nvme$subsystem", 00:37:27.739 "trtype": "$TEST_TRANSPORT", 00:37:27.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.739 "adrfam": "ipv4", 00:37:27.739 "trsvcid": "$NVMF_PORT", 00:37:27.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.739 "hdgst": ${hdgst:-false}, 00:37:27.739 "ddgst": ${ddgst:-false} 00:37:27.739 }, 00:37:27.739 "method": "bdev_nvme_attach_controller" 00:37:27.739 } 00:37:27.739 EOF 00:37:27.739 )") 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
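
A note on the config generation traced above: gen_nvmf_target_json emits one "bdev_nvme_attach_controller" entry per subsystem id, joins the entries with IFS=',' and pretty-prints the result through jq; fio_bdev then reads that JSON as --spdk_json_conf from /dev/fd/62, so the per-test config never touches disk. A minimal standalone sketch of the same idea follows, assuming the top-level subsystems/bdev wrapper that the fio spdk_bdev plugin's config format uses (the wrapper itself is not visible in this excerpt, and the function and variable names here are illustrative, not the harness's own):

#!/usr/bin/env bash
# Sketch only: rebuilds the kind of JSON printed a few lines below,
# one bdev_nvme_attach_controller entry per subsystem id passed in.
gen_conf() {
    local sub entries=()
    for sub in "$@"; do
        entries+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
        )")
    done
    # Comma-join the entries and wrap them in the bdev subsystem config.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${entries[*]}" | jq .
}

gen_conf 0 1    # two controllers, matching the JSON printed below

Passing the result over a file descriptor (hence /dev/fd/62 on the fio_bdev command line) is what lets each test compose a fresh config inline without temp files.
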
00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:27.739 "params": { 00:37:27.739 "name": "Nvme0", 00:37:27.739 "trtype": "tcp", 00:37:27.739 "traddr": "10.0.0.3", 00:37:27.739 "adrfam": "ipv4", 00:37:27.739 "trsvcid": "4420", 00:37:27.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.739 "hdgst": false, 00:37:27.739 "ddgst": false 00:37:27.739 }, 00:37:27.739 "method": "bdev_nvme_attach_controller" 00:37:27.739 },{ 00:37:27.739 "params": { 00:37:27.739 "name": "Nvme1", 00:37:27.739 "trtype": "tcp", 00:37:27.739 "traddr": "10.0.0.3", 00:37:27.739 "adrfam": "ipv4", 00:37:27.739 "trsvcid": "4420", 00:37:27.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:27.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:27.739 "hdgst": false, 00:37:27.739 "ddgst": false 00:37:27.739 }, 00:37:27.739 "method": "bdev_nvme_attach_controller" 00:37:27.739 }' 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:27.739 17:58:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.739 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:27.739 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:27.739 fio-3.35 00:37:27.739 Starting 2 threads 00:37:37.707 00:37:37.707 filename0: (groupid=0, jobs=1): err= 0: pid=129611: Wed Nov 20 17:58:19 2024 00:37:37.707 read: IOPS=179, BW=716KiB/s (733kB/s)(7168KiB/10010msec) 00:37:37.707 slat (nsec): min=7526, max=82770, avg=12817.52, stdev=9589.39 00:37:37.707 clat (usec): min=450, max=42657, avg=22299.49, stdev=20235.04 00:37:37.707 lat (usec): min=458, max=42671, avg=22312.31, stdev=20234.81 00:37:37.707 clat percentiles (usec): 00:37:37.707 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 515], 00:37:37.707 | 30.00th=[ 545], 40.00th=[ 652], 50.00th=[40633], 60.00th=[41157], 00:37:37.707 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:37.707 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:37.707 | 99.99th=[42730] 00:37:37.707 bw ( KiB/s): min= 544, max= 992, per=50.45%, avg=715.20, stdev=112.43, samples=20 00:37:37.707 iops : 
min= 136, max= 248, avg=178.80, stdev=28.11, samples=20 00:37:37.707 lat (usec) : 500=13.78%, 750=28.63%, 1000=2.46% 00:37:37.707 lat (msec) : 2=1.34%, 4=0.22%, 50=53.57% 00:37:37.707 cpu : usr=96.98%, sys=2.45%, ctx=13, majf=0, minf=0 00:37:37.707 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.707 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.707 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:37.707 filename1: (groupid=0, jobs=1): err= 0: pid=129612: Wed Nov 20 17:58:19 2024 00:37:37.707 read: IOPS=175, BW=703KiB/s (720kB/s)(7056KiB/10036msec) 00:37:37.707 slat (nsec): min=6300, max=88082, avg=12266.77, stdev=9167.51 00:37:37.707 clat (usec): min=442, max=42884, avg=22716.40, stdev=20172.11 00:37:37.707 lat (usec): min=450, max=42924, avg=22728.66, stdev=20171.97 00:37:37.707 clat percentiles (usec): 00:37:37.707 | 1.00th=[ 465], 5.00th=[ 490], 10.00th=[ 502], 20.00th=[ 523], 00:37:37.707 | 30.00th=[ 553], 40.00th=[ 685], 50.00th=[40633], 60.00th=[41157], 00:37:37.707 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:37.708 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:37:37.708 | 99.99th=[42730] 00:37:37.708 bw ( KiB/s): min= 512, max= 1024, per=49.60%, avg=703.90, stdev=139.32, samples=20 00:37:37.708 iops : min= 128, max= 256, avg=175.95, stdev=34.84, samples=20 00:37:37.708 lat (usec) : 500=9.41%, 750=31.58%, 1000=2.38% 00:37:37.708 lat (msec) : 2=1.98%, 50=54.65% 00:37:37.708 cpu : usr=96.68%, sys=2.86%, ctx=12, majf=0, minf=0 00:37:37.708 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.708 issued rwts: total=1764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.708 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:37.708 00:37:37.708 Run status group 0 (all jobs): 00:37:37.708 READ: bw=1417KiB/s (1451kB/s), 703KiB/s-716KiB/s (720kB/s-733kB/s), io=13.9MiB (14.6MB), run=10010-10036msec 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 17:58:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 ************************************ 00:37:37.708 END TEST fio_dif_1_multi_subsystems 00:37:37.708 ************************************ 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 00:37:37.708 real 0m11.359s 00:37:37.708 user 0m20.324s 00:37:37.708 sys 0m0.857s 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:37.708 17:58:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 17:58:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:37.966 17:58:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:37.966 17:58:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.966 17:58:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 ************************************ 00:37:37.966 START TEST fio_dif_rand_params 00:37:37.966 ************************************ 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 bdev_null0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.966 [2024-11-20 17:58:19.580518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:37.966 { 00:37:37.966 "params": { 00:37:37.966 "name": "Nvme$subsystem", 00:37:37.966 "trtype": "$TEST_TRANSPORT", 00:37:37.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.966 "adrfam": "ipv4", 00:37:37.966 "trsvcid": "$NVMF_PORT", 00:37:37.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:37:37.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.966 "hdgst": ${hdgst:-false}, 00:37:37.966 "ddgst": ${ddgst:-false} 00:37:37.966 }, 00:37:37.966 "method": "bdev_nvme_attach_controller" 00:37:37.966 } 00:37:37.966 EOF 00:37:37.966 )") 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:37.966 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:37.967 "params": { 00:37:37.967 "name": "Nvme0", 00:37:37.967 "trtype": "tcp", 00:37:37.967 "traddr": "10.0.0.3", 00:37:37.967 "adrfam": "ipv4", 00:37:37.967 "trsvcid": "4420", 00:37:37.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.967 "hdgst": false, 00:37:37.967 "ddgst": false 00:37:37.967 }, 00:37:37.967 "method": "bdev_nvme_attach_controller" 00:37:37.967 }' 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:37.967 17:58:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:38.225 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:38.226 ... 
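
The job line just echoed corresponds to an fio invocation along the lines below. The actual job file travels over /dev/fd/61 and is not shown in the log, so these flags are reconstructed from the parameters set at the start of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5); Nvme0n1 is the bdev name that an attach_controller named "Nvme0" exposes:

# Reconstruction, not the literal harness command line:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./conf.json \
    --thread=1 --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --time_based=1 --runtime=5

The spdk_bdev engine runs in fio's thread mode, which is why --thread=1 is included here even though the traced command line only shows the two fd arguments.
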
00:37:38.226 fio-3.35 00:37:38.226 Starting 3 threads 00:37:44.798 00:37:44.798 filename0: (groupid=0, jobs=1): err= 0: pid=129762: Wed Nov 20 17:58:25 2024 00:37:44.798 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(148MiB/5006msec) 00:37:44.798 slat (nsec): min=7369, max=69336, avg=13999.09, stdev=5835.21 00:37:44.798 clat (usec): min=4170, max=55358, avg=12651.01, stdev=10315.76 00:37:44.798 lat (usec): min=4182, max=55370, avg=12665.01, stdev=10315.67 00:37:44.798 clat percentiles (usec): 00:37:44.798 | 1.00th=[ 4490], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8356], 00:37:44.798 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10552], 60.00th=[10814], 00:37:44.798 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12387], 95.00th=[50070], 00:37:44.798 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[55313], 00:37:44.798 | 99.99th=[55313] 00:37:44.798 bw ( KiB/s): min=21504, max=38912, per=33.90%, avg=30284.80, stdev=6854.40, samples=10 00:37:44.798 iops : min= 168, max= 304, avg=236.60, stdev=53.55, samples=10 00:37:44.798 lat (msec) : 10=38.23%, 20=55.19%, 50=1.27%, 100=5.32% 00:37:44.798 cpu : usr=93.07%, sys=5.31%, ctx=34, majf=0, minf=9 00:37:44.798 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 issued rwts: total=1185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:44.798 filename0: (groupid=0, jobs=1): err= 0: pid=129763: Wed Nov 20 17:58:25 2024 00:37:44.798 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5003msec) 00:37:44.798 slat (nsec): min=7628, max=51345, avg=12011.67, stdev=5364.51 00:37:44.798 clat (usec): min=3999, max=52681, avg=11724.65, stdev=6007.65 00:37:44.798 lat (usec): min=4012, max=52694, avg=11736.66, stdev=6008.13 00:37:44.798 clat percentiles (usec): 00:37:44.798 | 1.00th=[ 4228], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 8291], 00:37:44.798 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[12649], 60.00th=[14091], 00:37:44.798 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15664], 95.00th=[16319], 00:37:44.798 | 99.00th=[47449], 99.50th=[49021], 99.90th=[51643], 99.95th=[52691], 00:37:44.798 | 99.99th=[52691] 00:37:44.798 bw ( KiB/s): min=25088, max=44544, per=37.25%, avg=33280.00, stdev=7030.68, samples=9 00:37:44.798 iops : min= 196, max= 348, avg=260.00, stdev=54.93, samples=9 00:37:44.798 lat (msec) : 4=0.08%, 10=41.90%, 20=56.62%, 50=1.10%, 100=0.31% 00:37:44.798 cpu : usr=92.76%, sys=5.72%, ctx=11, majf=0, minf=9 00:37:44.798 IO depths : 1=24.2%, 2=75.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 issued rwts: total=1277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:44.798 filename0: (groupid=0, jobs=1): err= 0: pid=129764: Wed Nov 20 17:58:25 2024 00:37:44.798 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(129MiB/5002msec) 00:37:44.798 slat (usec): min=6, max=347, avg=14.79, stdev=12.24 00:37:44.798 clat (usec): min=3991, max=55919, avg=14515.42, stdev=10721.84 00:37:44.798 lat (usec): min=3999, max=55933, avg=14530.22, stdev=10721.94 00:37:44.798 clat percentiles (usec): 00:37:44.798 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 
8979], 00:37:44.798 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:37:44.798 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14877], 95.00th=[49546], 00:37:44.798 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:37:44.798 | 99.99th=[55837] 00:37:44.798 bw ( KiB/s): min=19456, max=32256, per=29.32%, avg=26197.33, stdev=3965.93, samples=9 00:37:44.798 iops : min= 152, max= 252, avg=204.67, stdev=30.98, samples=9 00:37:44.798 lat (msec) : 4=0.19%, 10=23.06%, 20=69.19%, 50=3.29%, 100=4.26% 00:37:44.798 cpu : usr=93.56%, sys=4.68%, ctx=108, majf=0, minf=9 00:37:44.798 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:44.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.798 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:44.798 00:37:44.798 Run status group 0 (all jobs): 00:37:44.798 READ: bw=87.2MiB/s (91.5MB/s), 25.8MiB/s-31.9MiB/s (27.0MB/s-33.5MB/s), io=437MiB (458MB), run=5002-5006msec 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.798 bdev_null0 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.798 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 [2024-11-20 17:58:25.745478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 bdev_null1 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 bdev_null2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 
-- # config+=("$(cat <<-EOF 00:37:44.799 { 00:37:44.799 "params": { 00:37:44.799 "name": "Nvme$subsystem", 00:37:44.799 "trtype": "$TEST_TRANSPORT", 00:37:44.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.799 "adrfam": "ipv4", 00:37:44.799 "trsvcid": "$NVMF_PORT", 00:37:44.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.799 "hdgst": ${hdgst:-false}, 00:37:44.799 "ddgst": ${ddgst:-false} 00:37:44.799 }, 00:37:44.799 "method": "bdev_nvme_attach_controller" 00:37:44.799 } 00:37:44.799 EOF 00:37:44.799 )") 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:44.799 { 00:37:44.799 "params": { 00:37:44.799 "name": "Nvme$subsystem", 00:37:44.799 "trtype": "$TEST_TRANSPORT", 00:37:44.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.799 "adrfam": "ipv4", 00:37:44.799 "trsvcid": "$NVMF_PORT", 00:37:44.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.799 "hdgst": ${hdgst:-false}, 00:37:44.799 "ddgst": ${ddgst:-false} 00:37:44.799 }, 00:37:44.799 "method": "bdev_nvme_attach_controller" 00:37:44.799 } 00:37:44.799 EOF 00:37:44.799 )") 00:37:44.799 17:58:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:44.799 { 00:37:44.799 "params": { 00:37:44.799 "name": "Nvme$subsystem", 00:37:44.799 "trtype": "$TEST_TRANSPORT", 00:37:44.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.799 "adrfam": "ipv4", 00:37:44.799 "trsvcid": "$NVMF_PORT", 00:37:44.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.799 "hdgst": ${hdgst:-false}, 00:37:44.799 "ddgst": ${ddgst:-false} 00:37:44.799 }, 00:37:44.799 "method": "bdev_nvme_attach_controller" 00:37:44.799 } 00:37:44.799 EOF 00:37:44.799 )") 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:37:44.799 17:58:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:44.799 "params": { 00:37:44.799 "name": "Nvme0", 00:37:44.799 "trtype": "tcp", 00:37:44.799 "traddr": "10.0.0.3", 00:37:44.799 "adrfam": "ipv4", 00:37:44.799 "trsvcid": "4420", 00:37:44.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.799 "hdgst": false, 00:37:44.799 "ddgst": false 00:37:44.799 }, 00:37:44.799 "method": "bdev_nvme_attach_controller" 00:37:44.799 },{ 00:37:44.799 "params": { 00:37:44.799 "name": "Nvme1", 00:37:44.799 "trtype": "tcp", 00:37:44.799 "traddr": "10.0.0.3", 00:37:44.799 "adrfam": "ipv4", 00:37:44.799 "trsvcid": "4420", 00:37:44.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:44.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:44.800 "hdgst": false, 00:37:44.800 "ddgst": false 00:37:44.800 }, 00:37:44.800 "method": "bdev_nvme_attach_controller" 00:37:44.800 },{ 00:37:44.800 "params": { 00:37:44.800 "name": "Nvme2", 00:37:44.800 "trtype": "tcp", 00:37:44.800 "traddr": "10.0.0.3", 00:37:44.800 "adrfam": "ipv4", 00:37:44.800 "trsvcid": "4420", 00:37:44.800 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:44.800 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:44.800 "hdgst": false, 00:37:44.800 "ddgst": false 00:37:44.800 }, 00:37:44.800 "method": "bdev_nvme_attach_controller" 00:37:44.800 }' 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:44.800 
17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:44.800 17:58:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.800 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.800 ... 00:37:44.800 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.800 ... 00:37:44.800 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.800 ... 00:37:44.800 fio-3.35 00:37:44.800 Starting 24 threads 00:37:57.011 00:37:57.011 filename0: (groupid=0, jobs=1): err= 0: pid=129861: Wed Nov 20 17:58:37 2024 00:37:57.011 read: IOPS=209, BW=838KiB/s (858kB/s)(8440KiB/10069msec) 00:37:57.011 slat (usec): min=3, max=4056, avg=17.21, stdev=124.35 00:37:57.011 clat (usec): min=1745, max=158838, avg=76159.71, stdev=28670.59 00:37:57.011 lat (usec): min=1753, max=158849, avg=76176.92, stdev=28672.24 00:37:57.011 clat percentiles (usec): 00:37:57.011 | 1.00th=[ 1844], 5.00th=[ 35390], 10.00th=[ 46400], 20.00th=[ 55837], 00:37:57.011 | 30.00th=[ 60031], 40.00th=[ 70779], 50.00th=[ 72877], 60.00th=[ 81265], 00:37:57.011 | 70.00th=[ 88605], 80.00th=[102237], 90.00th=[114820], 95.00th=[124257], 00:37:57.011 | 99.00th=[139461], 99.50th=[149947], 99.90th=[158335], 99.95th=[158335], 00:37:57.011 | 99.99th=[158335] 00:37:57.011 bw ( KiB/s): min= 560, max= 1408, per=4.72%, avg=837.40, stdev=198.55, samples=20 00:37:57.011 iops : min= 140, max= 352, avg=209.35, stdev=49.64, samples=20 00:37:57.011 lat (msec) : 2=2.18%, 4=0.09%, 10=1.52%, 50=13.13%, 100=62.23% 00:37:57.011 lat (msec) : 250=20.85% 00:37:57.011 cpu : usr=35.44%, sys=0.65%, ctx=1153, majf=0, minf=9 00:37:57.011 IO depths : 1=1.3%, 2=2.8%, 4=10.0%, 8=73.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:57.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.011 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.011 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.011 filename0: (groupid=0, jobs=1): err= 0: pid=129862: Wed Nov 20 17:58:37 2024 00:37:57.011 read: IOPS=206, BW=827KiB/s (847kB/s)(8308KiB/10049msec) 00:37:57.011 slat (nsec): min=5055, max=81339, avg=13000.78, stdev=7998.57 00:37:57.011 clat (msec): min=27, max=171, avg=77.14, stdev=25.09 00:37:57.011 lat (msec): min=27, max=171, avg=77.15, stdev=25.09 00:37:57.011 clat percentiles (msec): 00:37:57.011 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 56], 00:37:57.011 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:37:57.011 | 70.00th=[ 89], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 121], 00:37:57.011 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 171], 00:37:57.011 | 99.99th=[ 171] 00:37:57.012 bw ( KiB/s): min= 507, max= 1088, per=4.66%, avg=826.55, stdev=163.69, samples=20 00:37:57.012 iops : min= 126, max= 272, avg=206.60, stdev=41.00, samples=20 00:37:57.012 lat (msec) : 50=17.53%, 100=61.77%, 250=20.70% 00:37:57.012 cpu : usr=36.14%, sys=0.59%, ctx=1026, majf=0, minf=9 00:37:57.012 IO depths : 1=1.3%, 2=2.8%, 4=9.3%, 8=74.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129863: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=170, BW=683KiB/s (699kB/s)(6844KiB/10024msec) 00:37:57.012 slat (usec): min=5, max=8050, avg=28.46, stdev=335.93 00:37:57.012 clat (msec): min=20, max=226, avg=93.55, stdev=28.15 00:37:57.012 lat (msec): min=26, max=226, avg=93.57, stdev=28.13 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 36], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 71], 00:37:57.012 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 100], 00:37:57.012 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 144], 00:37:57.012 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 226], 99.95th=[ 226], 00:37:57.012 | 99.99th=[ 226] 00:37:57.012 bw ( KiB/s): min= 496, max= 887, per=3.82%, avg=677.35, stdev=126.15, samples=20 00:37:57.012 iops : min= 124, max= 221, avg=169.30, stdev=31.47, samples=20 00:37:57.012 lat (msec) : 50=3.10%, 100=56.93%, 250=39.98% 00:37:57.012 cpu : usr=38.33%, sys=0.80%, ctx=1055, majf=0, minf=9 00:37:57.012 IO depths : 1=3.2%, 2=6.9%, 4=17.1%, 8=63.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129864: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=212, BW=852KiB/s (872kB/s)(8552KiB/10039msec) 00:37:57.012 slat (usec): min=7, max=7321, avg=16.66, stdev=172.02 00:37:57.012 clat (msec): min=28, max=149, avg=75.02, stdev=22.18 00:37:57.012 lat (msec): min=28, max=149, avg=75.04, stdev=22.19 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:37:57.012 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 80], 00:37:57.012 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 112], 00:37:57.012 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 150], 00:37:57.012 | 99.99th=[ 150] 00:37:57.012 bw ( KiB/s): min= 640, max= 1152, per=4.78%, avg=848.55, stdev=149.40, samples=20 00:37:57.012 iops : min= 160, max= 288, avg=212.10, stdev=37.37, samples=20 00:37:57.012 lat (msec) : 50=17.07%, 100=66.42%, 250=16.51% 00:37:57.012 cpu : usr=41.50%, sys=0.80%, ctx=1406, majf=0, minf=9 00:37:57.012 IO depths : 1=1.3%, 2=2.8%, 4=9.5%, 8=74.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129865: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=208, BW=834KiB/s (854kB/s)(8380KiB/10053msec) 00:37:57.012 slat (nsec): min=4746, max=44499, avg=12477.52, stdev=7042.12 00:37:57.012 clat (msec): min=32, max=147, avg=76.63, stdev=23.37 00:37:57.012 lat (msec): min=32, max=147, avg=76.64, stdev=23.37 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 35], 5.00th=[ 
47], 10.00th=[ 48], 20.00th=[ 58], 00:37:57.012 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 82], 00:37:57.012 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:37:57.012 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:37:57.012 | 99.99th=[ 148] 00:37:57.012 bw ( KiB/s): min= 552, max= 1120, per=4.69%, avg=831.50, stdev=144.12, samples=20 00:37:57.012 iops : min= 138, max= 280, avg=207.85, stdev=36.06, samples=20 00:37:57.012 lat (msec) : 50=14.99%, 100=69.64%, 250=15.37% 00:37:57.012 cpu : usr=35.60%, sys=0.76%, ctx=912, majf=0, minf=9 00:37:57.012 IO depths : 1=0.4%, 2=0.9%, 4=6.6%, 8=78.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=89.0%, 8=6.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129866: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=168, BW=674KiB/s (691kB/s)(6744KiB/10001msec) 00:37:57.012 slat (usec): min=3, max=8056, avg=24.16, stdev=236.13 00:37:57.012 clat (msec): min=14, max=248, avg=94.73, stdev=29.47 00:37:57.012 lat (msec): min=14, max=248, avg=94.75, stdev=29.47 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 42], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 71], 00:37:57.012 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 104], 00:37:57.012 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 142], 00:37:57.012 | 99.00th=[ 167], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:37:57.012 | 99.99th=[ 249] 00:37:57.012 bw ( KiB/s): min= 256, max= 896, per=3.73%, avg=662.74, stdev=159.56, samples=19 00:37:57.012 iops : min= 64, max= 224, avg=165.68, stdev=39.89, samples=19 00:37:57.012 lat (msec) : 20=0.12%, 50=2.67%, 100=52.91%, 250=44.31% 00:37:57.012 cpu : usr=44.25%, sys=0.81%, ctx=1290, majf=0, minf=9 00:37:57.012 IO depths : 1=3.6%, 2=7.9%, 4=19.1%, 8=60.3%, 16=9.1%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=92.6%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=1686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129867: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=169, BW=678KiB/s (694kB/s)(6784KiB/10007msec) 00:37:57.012 slat (usec): min=4, max=4039, avg=16.33, stdev=98.10 00:37:57.012 clat (msec): min=13, max=205, avg=94.27, stdev=28.65 00:37:57.012 lat (msec): min=13, max=205, avg=94.28, stdev=28.65 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 37], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 72], 00:37:57.012 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 104], 00:37:57.012 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 144], 00:37:57.012 | 99.00th=[ 178], 99.50th=[ 207], 99.90th=[ 207], 99.95th=[ 207], 00:37:57.012 | 99.99th=[ 207] 00:37:57.012 bw ( KiB/s): min= 384, max= 880, per=3.76%, avg=666.95, stdev=134.82, samples=19 00:37:57.012 iops : min= 96, max= 220, avg=166.74, stdev=33.71, samples=19 00:37:57.012 lat (msec) : 20=0.12%, 50=4.07%, 100=52.06%, 250=43.75% 00:37:57.012 cpu : usr=39.94%, sys=0.81%, ctx=1066, majf=0, minf=9 00:37:57.012 IO depths : 1=3.4%, 2=8.0%, 4=20.3%, 8=59.1%, 16=9.1%, 32=0.0%, 
>=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename0: (groupid=0, jobs=1): err= 0: pid=129868: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=200, BW=800KiB/s (819kB/s)(8032KiB/10040msec) 00:37:57.012 slat (usec): min=4, max=7032, avg=22.47, stdev=220.35 00:37:57.012 clat (msec): min=25, max=245, avg=79.85, stdev=26.91 00:37:57.012 lat (msec): min=25, max=245, avg=79.87, stdev=26.91 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 59], 00:37:57.012 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:37:57.012 | 70.00th=[ 91], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 128], 00:37:57.012 | 99.00th=[ 144], 99.50th=[ 209], 99.90th=[ 245], 99.95th=[ 245], 00:37:57.012 | 99.99th=[ 245] 00:37:57.012 bw ( KiB/s): min= 384, max= 1040, per=4.49%, avg=796.80, stdev=196.62, samples=20 00:37:57.012 iops : min= 96, max= 260, avg=199.20, stdev=49.15, samples=20 00:37:57.012 lat (msec) : 50=10.46%, 100=67.83%, 250=21.71% 00:37:57.012 cpu : usr=42.53%, sys=0.79%, ctx=1235, majf=0, minf=9 00:37:57.012 IO depths : 1=1.4%, 2=3.1%, 4=10.6%, 8=72.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename1: (groupid=0, jobs=1): err= 0: pid=129869: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=161, BW=646KiB/s (662kB/s)(6464KiB/10005msec) 00:37:57.012 slat (usec): min=3, max=4024, avg=20.68, stdev=160.12 00:37:57.012 clat (msec): min=10, max=318, avg=98.91, stdev=36.68 00:37:57.012 lat (msec): min=10, max=318, avg=98.93, stdev=36.68 00:37:57.012 clat percentiles (msec): 00:37:57.012 | 1.00th=[ 41], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 71], 00:37:57.012 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 105], 00:37:57.012 | 70.00th=[ 112], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 153], 00:37:57.012 | 99.00th=[ 184], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 321], 00:37:57.012 | 99.99th=[ 321] 00:37:57.012 bw ( KiB/s): min= 336, max= 896, per=3.53%, avg=626.53, stdev=152.05, samples=19 00:37:57.012 iops : min= 84, max= 224, avg=156.63, stdev=38.01, samples=19 00:37:57.012 lat (msec) : 20=0.99%, 50=2.66%, 100=50.56%, 250=44.80%, 500=0.99% 00:37:57.012 cpu : usr=41.16%, sys=0.72%, ctx=1210, majf=0, minf=9 00:37:57.012 IO depths : 1=3.3%, 2=7.2%, 4=18.6%, 8=61.6%, 16=9.3%, 32=0.0%, >=64=0.0% 00:37:57.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.012 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.012 filename1: (groupid=0, jobs=1): err= 0: pid=129870: Wed Nov 20 17:58:37 2024 00:37:57.012 read: IOPS=177, BW=710KiB/s (727kB/s)(7104KiB/10002msec) 00:37:57.012 slat (usec): min=7, max=4037, avg=19.01, stdev=137.14 00:37:57.013 clat (msec): min=2, max=208, avg=89.98, stdev=30.35 00:37:57.013 lat (msec): min=2, max=208, avg=89.99, 
stdev=30.35 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 65], 20.00th=[ 71], 00:37:57.013 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 103], 00:37:57.013 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 122], 95.00th=[ 138], 00:37:57.013 | 99.00th=[ 171], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:37:57.013 | 99.99th=[ 209] 00:37:57.013 bw ( KiB/s): min= 496, max= 896, per=3.80%, avg=673.68, stdev=132.06, samples=19 00:37:57.013 iops : min= 124, max= 224, avg=168.42, stdev=33.02, samples=19 00:37:57.013 lat (msec) : 4=0.90%, 10=1.80%, 20=1.80%, 50=1.80%, 100=51.69% 00:37:57.013 lat (msec) : 250=42.00% 00:37:57.013 cpu : usr=47.08%, sys=0.77%, ctx=1293, majf=0, minf=9 00:37:57.013 IO depths : 1=3.7%, 2=8.7%, 4=21.1%, 8=57.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=93.1%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129871: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=163, BW=652KiB/s (668kB/s)(6528KiB/10008msec) 00:37:57.013 slat (usec): min=5, max=8023, avg=18.23, stdev=198.45 00:37:57.013 clat (msec): min=11, max=309, avg=97.95, stdev=34.43 00:37:57.013 lat (msec): min=11, max=309, avg=97.97, stdev=34.43 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 16], 5.00th=[ 60], 10.00th=[ 70], 20.00th=[ 72], 00:37:57.013 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 107], 00:37:57.013 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 146], 00:37:57.013 | 99.00th=[ 205], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 309], 00:37:57.013 | 99.99th=[ 309] 00:37:57.013 bw ( KiB/s): min= 384, max= 896, per=3.56%, avg=632.05, stdev=136.89, samples=19 00:37:57.013 iops : min= 96, max= 224, avg=158.00, stdev=34.21, samples=19 00:37:57.013 lat (msec) : 20=1.96%, 50=2.21%, 100=52.63%, 250=42.22%, 500=0.98% 00:37:57.013 cpu : usr=34.16%, sys=0.61%, ctx=934, majf=0, minf=9 00:37:57.013 IO depths : 1=3.0%, 2=6.9%, 4=17.9%, 8=62.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=92.1%, 8=2.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129872: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=167, BW=672KiB/s (688kB/s)(6720KiB/10002msec) 00:37:57.013 slat (usec): min=4, max=8040, avg=25.49, stdev=293.74 00:37:57.013 clat (msec): min=11, max=243, avg=95.10, stdev=31.57 00:37:57.013 lat (msec): min=11, max=243, avg=95.13, stdev=31.57 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 18], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 71], 00:37:57.013 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 102], 00:37:57.013 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 133], 95.00th=[ 146], 00:37:57.013 | 99.00th=[ 192], 99.50th=[ 228], 99.90th=[ 245], 99.95th=[ 245], 00:37:57.013 | 99.99th=[ 245] 00:37:57.013 bw ( KiB/s): min= 384, max= 896, per=3.68%, avg=652.63, stdev=133.71, samples=19 00:37:57.013 iops : min= 96, max= 224, avg=163.16, stdev=33.43, samples=19 00:37:57.013 lat (msec) : 20=1.90%, 50=1.61%, 100=55.54%, 
250=40.95% 00:37:57.013 cpu : usr=35.32%, sys=0.70%, ctx=1000, majf=0, minf=9 00:37:57.013 IO depths : 1=2.8%, 2=6.4%, 4=17.1%, 8=63.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129873: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=200, BW=801KiB/s (821kB/s)(8060KiB/10057msec) 00:37:57.013 slat (usec): min=4, max=8083, avg=20.66, stdev=253.40 00:37:57.013 clat (msec): min=33, max=191, avg=79.71, stdev=24.03 00:37:57.013 lat (msec): min=33, max=191, avg=79.73, stdev=24.03 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:37:57.013 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:37:57.013 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:37:57.013 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:37:57.013 | 99.99th=[ 192] 00:37:57.013 bw ( KiB/s): min= 528, max= 1152, per=4.51%, avg=799.35, stdev=158.40, samples=20 00:37:57.013 iops : min= 132, max= 288, avg=199.80, stdev=39.62, samples=20 00:37:57.013 lat (msec) : 50=10.67%, 100=71.17%, 250=18.16% 00:37:57.013 cpu : usr=41.01%, sys=0.98%, ctx=1195, majf=0, minf=9 00:37:57.013 IO depths : 1=1.7%, 2=4.0%, 4=12.4%, 8=70.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129874: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=191, BW=765KiB/s (783kB/s)(7688KiB/10048msec) 00:37:57.013 slat (usec): min=3, max=8027, avg=32.57, stdev=408.24 00:37:57.013 clat (msec): min=35, max=198, avg=83.33, stdev=25.38 00:37:57.013 lat (msec): min=35, max=198, avg=83.37, stdev=25.37 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:37:57.013 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 85], 00:37:57.013 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 132], 00:37:57.013 | 99.00th=[ 157], 99.50th=[ 199], 99.90th=[ 199], 99.95th=[ 199], 00:37:57.013 | 99.99th=[ 199] 00:37:57.013 bw ( KiB/s): min= 509, max= 1040, per=4.30%, avg=762.25, stdev=144.35, samples=20 00:37:57.013 iops : min= 127, max= 260, avg=190.55, stdev=36.11, samples=20 00:37:57.013 lat (msec) : 50=8.95%, 100=69.25%, 250=21.80% 00:37:57.013 cpu : usr=34.06%, sys=0.58%, ctx=945, majf=0, minf=9 00:37:57.013 IO depths : 1=1.2%, 2=2.7%, 4=9.7%, 8=73.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129875: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=174, BW=699KiB/s (715kB/s)(6988KiB/10003msec) 00:37:57.013 slat (usec): min=4, max=8105, 
avg=30.51, stdev=346.65 00:37:57.013 clat (msec): min=6, max=248, avg=91.37, stdev=29.55 00:37:57.013 lat (msec): min=6, max=248, avg=91.40, stdev=29.55 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 12], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 70], 00:37:57.013 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 100], 00:37:57.013 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 138], 00:37:57.013 | 99.00th=[ 180], 99.50th=[ 218], 99.90th=[ 249], 99.95th=[ 249], 00:37:57.013 | 99.99th=[ 249] 00:37:57.013 bw ( KiB/s): min= 464, max= 896, per=3.77%, avg=668.21, stdev=115.27, samples=19 00:37:57.013 iops : min= 116, max= 224, avg=167.05, stdev=28.82, samples=19 00:37:57.013 lat (msec) : 10=0.92%, 20=1.83%, 50=1.32%, 100=56.67%, 250=39.27% 00:37:57.013 cpu : usr=39.13%, sys=0.67%, ctx=1209, majf=0, minf=9 00:37:57.013 IO depths : 1=2.8%, 2=6.4%, 4=16.5%, 8=63.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=91.9%, 8=3.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename1: (groupid=0, jobs=1): err= 0: pid=129876: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=174, BW=696KiB/s (713kB/s)(6984KiB/10031msec) 00:37:57.013 slat (usec): min=5, max=8034, avg=17.60, stdev=192.12 00:37:57.013 clat (msec): min=32, max=191, avg=91.74, stdev=28.70 00:37:57.013 lat (msec): min=32, max=191, avg=91.76, stdev=28.71 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 68], 00:37:57.013 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 99], 00:37:57.013 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 144], 00:37:57.013 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:37:57.013 | 99.99th=[ 192] 00:37:57.013 bw ( KiB/s): min= 472, max= 1024, per=3.90%, avg=692.00, stdev=146.84, samples=20 00:37:57.013 iops : min= 118, max= 256, avg=173.00, stdev=36.71, samples=20 00:37:57.013 lat (msec) : 50=7.85%, 100=53.04%, 250=39.12% 00:37:57.013 cpu : usr=39.11%, sys=0.63%, ctx=1150, majf=0, minf=9 00:37:57.013 IO depths : 1=1.7%, 2=3.8%, 4=13.6%, 8=69.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:37:57.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.013 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.013 filename2: (groupid=0, jobs=1): err= 0: pid=129877: Wed Nov 20 17:58:37 2024 00:37:57.013 read: IOPS=170, BW=684KiB/s (700kB/s)(6844KiB/10009msec) 00:37:57.013 slat (usec): min=3, max=3429, avg=16.18, stdev=82.99 00:37:57.013 clat (msec): min=12, max=270, avg=93.41, stdev=31.71 00:37:57.013 lat (msec): min=12, max=270, avg=93.43, stdev=31.71 00:37:57.013 clat percentiles (msec): 00:37:57.013 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 71], 00:37:57.013 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 102], 00:37:57.013 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 127], 95.00th=[ 150], 00:37:57.013 | 99.00th=[ 188], 99.50th=[ 253], 99.90th=[ 271], 99.95th=[ 271], 00:37:57.013 | 99.99th=[ 271] 00:37:57.013 bw ( KiB/s): min= 384, max= 928, per=3.78%, avg=670.74, stdev=145.36, samples=19 00:37:57.013 iops : min= 96, max= 232, 
avg=167.68, stdev=36.34, samples=19 00:37:57.013 lat (msec) : 20=0.88%, 50=4.79%, 100=53.89%, 250=39.51%, 500=0.94% 00:37:57.013 cpu : usr=33.59%, sys=0.52%, ctx=1014, majf=0, minf=9 00:37:57.013 IO depths : 1=1.9%, 2=4.7%, 4=14.8%, 8=67.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129878: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=174, BW=700KiB/s (717kB/s)(7000KiB/10004msec) 00:37:57.014 slat (usec): min=4, max=8044, avg=40.82, stdev=442.98 00:37:57.014 clat (msec): min=5, max=213, avg=91.24, stdev=30.47 00:37:57.014 lat (msec): min=5, max=213, avg=91.28, stdev=30.47 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 71], 00:37:57.014 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 99], 00:37:57.014 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 144], 00:37:57.014 | 99.00th=[ 192], 99.50th=[ 213], 99.90th=[ 213], 99.95th=[ 213], 00:37:57.014 | 99.99th=[ 213] 00:37:57.014 bw ( KiB/s): min= 384, max= 944, per=3.83%, avg=680.00, stdev=139.56, samples=19 00:37:57.014 iops : min= 96, max= 236, avg=170.00, stdev=34.89, samples=19 00:37:57.014 lat (msec) : 10=1.31%, 20=1.43%, 50=2.91%, 100=57.03%, 250=37.31% 00:37:57.014 cpu : usr=35.17%, sys=0.63%, ctx=1010, majf=0, minf=9 00:37:57.014 IO depths : 1=2.1%, 2=4.7%, 4=13.6%, 8=68.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129879: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=187, BW=749KiB/s (767kB/s)(7500KiB/10008msec) 00:37:57.014 slat (usec): min=7, max=4053, avg=17.56, stdev=131.26 00:37:57.014 clat (msec): min=15, max=270, avg=85.29, stdev=31.58 00:37:57.014 lat (msec): min=15, max=270, avg=85.31, stdev=31.58 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:37:57.014 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 94], 00:37:57.014 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 133], 00:37:57.014 | 99.00th=[ 167], 99.50th=[ 247], 99.90th=[ 271], 99.95th=[ 271], 00:37:57.014 | 99.99th=[ 271] 00:37:57.014 bw ( KiB/s): min= 336, max= 1160, per=4.18%, avg=742.32, stdev=203.15, samples=19 00:37:57.014 iops : min= 84, max= 290, avg=185.58, stdev=50.79, samples=19 00:37:57.014 lat (msec) : 20=0.85%, 50=13.44%, 100=55.68%, 250=29.76%, 500=0.27% 00:37:57.014 cpu : usr=35.02%, sys=0.62%, ctx=1112, majf=0, minf=9 00:37:57.014 IO depths : 1=0.7%, 2=1.7%, 4=7.4%, 8=76.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=89.7%, 8=6.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129880: 
Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=219, BW=879KiB/s (900kB/s)(8816KiB/10034msec) 00:37:57.014 slat (usec): min=5, max=4022, avg=15.55, stdev=120.93 00:37:57.014 clat (msec): min=25, max=246, avg=72.74, stdev=26.73 00:37:57.014 lat (msec): min=25, max=246, avg=72.76, stdev=26.73 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 50], 00:37:57.014 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 73], 00:37:57.014 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 116], 00:37:57.014 | 99.00th=[ 146], 99.50th=[ 211], 99.90th=[ 247], 99.95th=[ 247], 00:37:57.014 | 99.99th=[ 247] 00:37:57.014 bw ( KiB/s): min= 512, max= 1232, per=4.93%, avg=875.20, stdev=216.75, samples=20 00:37:57.014 iops : min= 128, max= 308, avg=218.80, stdev=54.19, samples=20 00:37:57.014 lat (msec) : 50=20.96%, 100=63.34%, 250=15.70% 00:37:57.014 cpu : usr=41.88%, sys=0.93%, ctx=1134, majf=0, minf=9 00:37:57.014 IO depths : 1=0.6%, 2=1.4%, 4=7.2%, 8=77.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129881: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=199, BW=799KiB/s (818kB/s)(8028KiB/10046msec) 00:37:57.014 slat (usec): min=3, max=5034, avg=15.27, stdev=112.34 00:37:57.014 clat (msec): min=34, max=196, avg=79.91, stdev=24.30 00:37:57.014 lat (msec): min=34, max=196, avg=79.93, stdev=24.30 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 62], 00:37:57.014 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:37:57.014 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 121], 00:37:57.014 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 197], 00:37:57.014 | 99.99th=[ 197] 00:37:57.014 bw ( KiB/s): min= 513, max= 1192, per=4.50%, avg=798.05, stdev=157.16, samples=20 00:37:57.014 iops : min= 128, max= 298, avg=199.50, stdev=39.31, samples=20 00:37:57.014 lat (msec) : 50=7.22%, 100=71.80%, 250=20.98% 00:37:57.014 cpu : usr=40.19%, sys=0.75%, ctx=1389, majf=0, minf=9 00:37:57.014 IO depths : 1=1.2%, 2=2.7%, 4=10.3%, 8=73.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129882: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=185, BW=744KiB/s (762kB/s)(7452KiB/10018msec) 00:37:57.014 slat (nsec): min=7556, max=51791, avg=13284.53, stdev=7739.59 00:37:57.014 clat (msec): min=26, max=238, avg=85.96, stdev=31.96 00:37:57.014 lat (msec): min=26, max=238, avg=85.97, stdev=31.96 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:37:57.014 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 90], 00:37:57.014 | 70.00th=[ 101], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 144], 00:37:57.014 | 99.00th=[ 161], 99.50th=[ 203], 99.90th=[ 239], 99.95th=[ 239], 00:37:57.014 | 99.99th=[ 239] 00:37:57.014 bw 
( KiB/s): min= 384, max= 1144, per=4.16%, avg=738.40, stdev=222.39, samples=20 00:37:57.014 iops : min= 96, max= 286, avg=184.60, stdev=55.60, samples=20 00:37:57.014 lat (msec) : 50=13.47%, 100=55.98%, 250=30.54% 00:37:57.014 cpu : usr=36.79%, sys=0.76%, ctx=1238, majf=0, minf=9 00:37:57.014 IO depths : 1=0.3%, 2=0.7%, 4=5.9%, 8=78.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=89.5%, 8=7.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129883: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=174, BW=697KiB/s (713kB/s)(6996KiB/10042msec) 00:37:57.014 slat (usec): min=6, max=8039, avg=25.93, stdev=332.28 00:37:57.014 clat (msec): min=36, max=203, avg=91.70, stdev=27.76 00:37:57.014 lat (msec): min=36, max=203, avg=91.72, stdev=27.78 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 70], 00:37:57.014 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 96], 00:37:57.014 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 132], 95.00th=[ 144], 00:37:57.014 | 99.00th=[ 165], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:37:57.014 | 99.99th=[ 205] 00:37:57.014 bw ( KiB/s): min= 384, max= 944, per=3.91%, avg=693.20, stdev=121.67, samples=20 00:37:57.014 iops : min= 96, max= 236, avg=173.30, stdev=30.42, samples=20 00:37:57.014 lat (msec) : 50=3.60%, 100=61.18%, 250=35.22% 00:37:57.014 cpu : usr=33.32%, sys=0.52%, ctx=989, majf=0, minf=9 00:37:57.014 IO depths : 1=1.3%, 2=3.0%, 4=11.8%, 8=71.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 filename2: (groupid=0, jobs=1): err= 0: pid=129884: Wed Nov 20 17:58:37 2024 00:37:57.014 read: IOPS=182, BW=730KiB/s (747kB/s)(7296KiB/10001msec) 00:37:57.014 slat (usec): min=3, max=8019, avg=24.21, stdev=281.38 00:37:57.014 clat (usec): min=1715, max=288565, avg=87528.01, stdev=34353.62 00:37:57.014 lat (usec): min=1723, max=288578, avg=87552.22, stdev=34355.03 00:37:57.014 clat percentiles (msec): 00:37:57.014 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 51], 20.00th=[ 67], 00:37:57.014 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 96], 00:37:57.014 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 144], 00:37:57.014 | 99.00th=[ 184], 99.50th=[ 232], 99.90th=[ 234], 99.95th=[ 288], 00:37:57.014 | 99.99th=[ 288] 00:37:57.014 bw ( KiB/s): min= 384, max= 864, per=3.87%, avg=687.16, stdev=133.89, samples=19 00:37:57.014 iops : min= 96, max= 216, avg=171.79, stdev=33.47, samples=19 00:37:57.014 lat (msec) : 2=0.33%, 4=0.71%, 10=2.47%, 20=0.88%, 50=5.43% 00:37:57.014 lat (msec) : 100=57.13%, 250=33.00%, 500=0.05% 00:37:57.014 cpu : usr=39.27%, sys=0.83%, ctx=1056, majf=0, minf=9 00:37:57.014 IO depths : 1=1.9%, 2=4.7%, 4=14.8%, 8=67.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:37:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.014 issued rwts: total=1824,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:57.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.014 00:37:57.014 Run status group 0 (all jobs): 00:37:57.014 READ: bw=17.3MiB/s (18.2MB/s), 646KiB/s-879KiB/s (662kB/s-900kB/s), io=174MiB (183MB), run=10001-10069msec 00:37:57.014 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:57.014 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:57.014 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.014 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:57.014 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:57.015 17:58:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 bdev_null0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 [2024-11-20 17:58:37.316391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:57.015 
17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 bdev_null1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:57.015 { 00:37:57.015 "params": { 00:37:57.015 "name": "Nvme$subsystem", 00:37:57.015 "trtype": "$TEST_TRANSPORT", 00:37:57.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:57.015 "adrfam": "ipv4", 00:37:57.015 "trsvcid": "$NVMF_PORT", 00:37:57.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:57.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:57.015 "hdgst": ${hdgst:-false}, 00:37:57.015 "ddgst": ${ddgst:-false} 00:37:57.015 }, 00:37:57.015 "method": "bdev_nvme_attach_controller" 00:37:57.015 } 00:37:57.015 EOF 00:37:57.015 )") 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:57.015 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:57.015 { 00:37:57.015 "params": { 00:37:57.015 "name": "Nvme$subsystem", 00:37:57.015 "trtype": "$TEST_TRANSPORT", 00:37:57.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:57.015 "adrfam": "ipv4", 00:37:57.015 "trsvcid": "$NVMF_PORT", 00:37:57.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:57.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:57.016 "hdgst": ${hdgst:-false}, 00:37:57.016 "ddgst": ${ddgst:-false} 00:37:57.016 }, 00:37:57.016 "method": "bdev_nvme_attach_controller" 00:37:57.016 } 00:37:57.016 EOF 00:37:57.016 )") 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
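Side note on the trace above: gen_nvmf_target_json collects one heredoc fragment per controller into the config array, comma-joins the fragments, and splices the result into a larger JSON document that jq then validates and pretty-prints. A minimal sketch of just the join step, assuming a config array populated as in the trace (the enclosing document the real helper builds around it is omitted here):

    # Comma-join the per-controller fragments collected above. The real helper
    # embeds this joined list in a bigger config document before piping it to `jq .`.
    joined=$(IFS=,; printf '%s\n' "${config[*]}")
    echo "$joined"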
00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:57.016 "params": { 00:37:57.016 "name": "Nvme0", 00:37:57.016 "trtype": "tcp", 00:37:57.016 "traddr": "10.0.0.3", 00:37:57.016 "adrfam": "ipv4", 00:37:57.016 "trsvcid": "4420", 00:37:57.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.016 "hdgst": false, 00:37:57.016 "ddgst": false 00:37:57.016 }, 00:37:57.016 "method": "bdev_nvme_attach_controller" 00:37:57.016 },{ 00:37:57.016 "params": { 00:37:57.016 "name": "Nvme1", 00:37:57.016 "trtype": "tcp", 00:37:57.016 "traddr": "10.0.0.3", 00:37:57.016 "adrfam": "ipv4", 00:37:57.016 "trsvcid": "4420", 00:37:57.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:57.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:57.016 "hdgst": false, 00:37:57.016 "ddgst": false 00:37:57.016 }, 00:37:57.016 "method": "bdev_nvme_attach_controller" 00:37:57.016 }' 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:57.016 17:58:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.016 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:57.016 ... 00:37:57.016 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:57.016 ... 
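For reference, the job banner above is fio loading the SPDK bdev engine against the JSON just printed; the run output follows. Outside this harness, a roughly equivalent standalone invocation could look like the sketch below. The plugin path and fio binary match the trace, but the saved JSON file, the job name, and the Nvme0n1 bdev name are assumptions, not taken from this log; the blank slot at the front of LD_PRELOAD in the trace is where a sanitizer runtime would be preloaded had ldd found one linked into the plugin.

    # Sketch only: replay the randread job against the two attached controllers.
    # ./nvme.json is a hypothetical saved copy of the config printed above, and
    # Nvme0n1 is the bdev name conventionally produced by attaching "Nvme0".
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./nvme.json \
      --thread=1 --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 --runtime=5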
00:37:57.016 fio-3.35 00:37:57.016 Starting 4 threads 00:38:02.291 00:38:02.291 filename0: (groupid=0, jobs=1): err= 0: pid=130006: Wed Nov 20 17:58:43 2024 00:38:02.291 read: IOPS=1869, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5001msec) 00:38:02.291 slat (usec): min=3, max=660, avg=23.51, stdev=15.22 00:38:02.291 clat (usec): min=1195, max=7927, avg=4163.24, stdev=465.73 00:38:02.291 lat (usec): min=1214, max=7935, avg=4186.75, stdev=464.88 00:38:02.291 clat percentiles (usec): 00:38:02.291 | 1.00th=[ 3458], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3982], 00:38:02.291 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4113], 00:38:02.291 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 5145], 00:38:02.291 | 99.00th=[ 6325], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 7767], 00:38:02.291 | 99.99th=[ 7898] 00:38:02.291 bw ( KiB/s): min=14080, max=15536, per=24.78%, avg=14901.11, stdev=521.98, samples=9 00:38:02.291 iops : min= 1760, max= 1942, avg=1862.56, stdev=65.21, samples=9 00:38:02.291 lat (msec) : 2=0.30%, 4=29.01%, 10=70.69% 00:38:02.291 cpu : usr=94.40%, sys=3.82%, ctx=95, majf=0, minf=0 00:38:02.291 IO depths : 1=10.6%, 2=21.8%, 4=53.1%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:02.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 issued rwts: total=9351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:02.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:02.291 filename0: (groupid=0, jobs=1): err= 0: pid=130007: Wed Nov 20 17:58:43 2024 00:38:02.291 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5003msec) 00:38:02.291 slat (usec): min=6, max=105, avg=12.66, stdev= 8.93 00:38:02.291 clat (usec): min=1935, max=8994, avg=4184.70, stdev=333.63 00:38:02.291 lat (usec): min=1943, max=9019, avg=4197.36, stdev=333.19 00:38:02.291 clat percentiles (usec): 00:38:02.291 | 1.00th=[ 3720], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4047], 00:38:02.291 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:38:02.291 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5145], 00:38:02.291 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 7570], 00:38:02.291 | 99.99th=[ 8979] 00:38:02.291 bw ( KiB/s): min=14080, max=15616, per=24.95%, avg=15001.00, stdev=532.13, samples=9 00:38:02.291 iops : min= 1760, max= 1952, avg=1875.11, stdev=66.51, samples=9 00:38:02.291 lat (msec) : 2=0.08%, 4=7.57%, 10=92.35% 00:38:02.291 cpu : usr=94.74%, sys=3.88%, ctx=8, majf=0, minf=0 00:38:02.291 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:02.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 issued rwts: total=9424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:02.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:02.291 filename1: (groupid=0, jobs=1): err= 0: pid=130008: Wed Nov 20 17:58:43 2024 00:38:02.291 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:38:02.291 slat (usec): min=4, max=702, avg=21.41, stdev=12.30 00:38:02.291 clat (usec): min=1131, max=7886, avg=4152.84, stdev=343.96 00:38:02.291 lat (usec): min=1139, max=7899, avg=4174.25, stdev=342.55 00:38:02.291 clat percentiles (usec): 00:38:02.291 | 1.00th=[ 3752], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3982], 00:38:02.291 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 
4080], 60.00th=[ 4113], 00:38:02.291 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5080], 00:38:02.291 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[ 7832], 00:38:02.291 | 99.99th=[ 7898] 00:38:02.291 bw ( KiB/s): min=14080, max=15616, per=24.94%, avg=14995.56, stdev=527.14, samples=9 00:38:02.291 iops : min= 1760, max= 1952, avg=1874.44, stdev=65.89, samples=9 00:38:02.291 lat (msec) : 2=0.03%, 4=23.48%, 10=76.48% 00:38:02.291 cpu : usr=95.28%, sys=3.48%, ctx=6, majf=0, minf=0 00:38:02.291 IO depths : 1=11.2%, 2=24.8%, 4=50.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:02.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:02.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:02.291 filename1: (groupid=0, jobs=1): err= 0: pid=130009: Wed Nov 20 17:58:43 2024 00:38:02.291 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5003msec) 00:38:02.291 slat (nsec): min=7064, max=72853, avg=17695.83, stdev=11431.59 00:38:02.291 clat (usec): min=2831, max=7787, avg=4171.61, stdev=331.97 00:38:02.291 lat (usec): min=2840, max=7795, avg=4189.31, stdev=328.87 00:38:02.291 clat percentiles (usec): 00:38:02.291 | 1.00th=[ 3785], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 4015], 00:38:02.291 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:38:02.291 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5145], 00:38:02.291 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[ 7504], 00:38:02.291 | 99.99th=[ 7767] 00:38:02.291 bw ( KiB/s): min=14080, max=15616, per=24.93%, avg=14990.22, stdev=537.16, samples=9 00:38:02.291 iops : min= 1760, max= 1952, avg=1873.78, stdev=67.14, samples=9 00:38:02.291 lat (msec) : 4=16.90%, 10=83.10% 00:38:02.291 cpu : usr=94.82%, sys=3.84%, ctx=4, majf=0, minf=1 00:38:02.291 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:02.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.291 issued rwts: total=9416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:02.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:02.291 00:38:02.291 Run status group 0 (all jobs): 00:38:02.291 READ: bw=58.7MiB/s (61.6MB/s), 14.6MiB/s-14.7MiB/s (15.3MB/s-15.4MB/s), io=294MiB (308MB), run=5001-5003msec 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 ************************************ 00:38:02.291 END TEST fio_dif_rand_params 00:38:02.291 ************************************ 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.291 00:38:02.291 real 0m23.997s 00:38:02.291 user 2m7.989s 00:38:02.291 sys 0m4.247s 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 17:58:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:02.291 17:58:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:02.291 17:58:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:02.291 17:58:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.291 ************************************ 00:38:02.291 START TEST fio_dif_digest 00:38:02.291 ************************************ 00:38:02.291 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:38:02.291 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:02.291 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:02.291 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:02.292 17:58:43 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:02.292 bdev_null0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:02.292 [2024-11-20 17:58:43.632695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:02.292 { 00:38:02.292 "params": { 00:38:02.292 "name": "Nvme$subsystem", 00:38:02.292 "trtype": "$TEST_TRANSPORT", 00:38:02.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.292 "adrfam": "ipv4", 00:38:02.292 "trsvcid": 
"$NVMF_PORT", 00:38:02.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.292 "hdgst": ${hdgst:-false}, 00:38:02.292 "ddgst": ${ddgst:-false} 00:38:02.292 }, 00:38:02.292 "method": "bdev_nvme_attach_controller" 00:38:02.292 } 00:38:02.292 EOF 00:38:02.292 )") 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:02.292 "params": { 00:38:02.292 "name": "Nvme0", 00:38:02.292 "trtype": "tcp", 00:38:02.292 "traddr": "10.0.0.3", 00:38:02.292 "adrfam": "ipv4", 00:38:02.292 "trsvcid": "4420", 00:38:02.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:02.292 "hdgst": true, 00:38:02.292 "ddgst": true 00:38:02.292 }, 00:38:02.292 "method": "bdev_nvme_attach_controller" 00:38:02.292 }' 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:02.292 17:58:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.292 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:02.292 ... 
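The asan_lib probes in the trace above are the harness checking whether the fio plugin was built with a sanitizer: it runs ldd on the plugin, greps for libasan and libclang_rt.asan, and would prepend any hit to LD_PRELOAD so the sanitizer runtime loads before the engine does. Approximately, as a sketch (variable names follow the trace):

    # Approximate reconstruction of the sanitizer probe traced above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        # Both greps came back empty in this run, so LD_PRELOAD ends up
        # holding only the plugin itself.
        [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
    done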
00:38:02.292 fio-3.35 00:38:02.292 Starting 3 threads 00:38:14.498 00:38:14.498 filename0: (groupid=0, jobs=1): err= 0: pid=130110: Wed Nov 20 17:58:54 2024 00:38:14.498 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(273MiB/10003msec) 00:38:14.498 slat (nsec): min=5835, max=68092, avg=19830.87, stdev=7372.68 00:38:14.498 clat (usec): min=7195, max=18979, avg=13705.85, stdev=2404.47 00:38:14.498 lat (usec): min=7218, max=18987, avg=13725.69, stdev=2404.64 00:38:14.498 clat percentiles (usec): 00:38:14.498 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[11994], 00:38:14.498 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:38:14.498 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:38:14.498 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18744], 00:38:14.498 | 99.99th=[19006] 00:38:14.498 bw ( KiB/s): min=25344, max=31744, per=34.58%, avg=27796.21, stdev=1588.02, samples=19 00:38:14.498 iops : min= 198, max= 248, avg=217.16, stdev=12.41, samples=19 00:38:14.498 lat (msec) : 10=16.02%, 20=83.98% 00:38:14.498 cpu : usr=93.23%, sys=4.68%, ctx=86, majf=0, minf=0 00:38:14.498 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.498 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:14.498 filename0: (groupid=0, jobs=1): err= 0: pid=130111: Wed Nov 20 17:58:54 2024 00:38:14.498 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(272MiB/10009msec) 00:38:14.498 slat (nsec): min=7540, max=48893, avg=16353.62, stdev=5138.04 00:38:14.498 clat (usec): min=7065, max=94388, avg=13765.94, stdev=8593.03 00:38:14.498 lat (usec): min=7076, max=94410, avg=13782.30, stdev=8593.24 00:38:14.498 clat percentiles (usec): 00:38:14.498 | 1.00th=[10028], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:38:14.498 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:38:14.498 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[14091], 00:38:14.498 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[93848], 00:38:14.498 | 99.99th=[94897] 00:38:14.498 bw ( KiB/s): min=20736, max=33024, per=35.14%, avg=28240.84, stdev=3039.89, samples=19 00:38:14.498 iops : min= 162, max= 258, avg=220.63, stdev=23.75, samples=19 00:38:14.498 lat (msec) : 10=1.01%, 20=94.67%, 100=4.32% 00:38:14.498 cpu : usr=94.02%, sys=4.51%, ctx=8, majf=0, minf=9 00:38:14.498 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.498 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:14.498 filename0: (groupid=0, jobs=1): err= 0: pid=130112: Wed Nov 20 17:58:54 2024 00:38:14.498 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(240MiB/10004msec) 00:38:14.498 slat (nsec): min=5718, max=76626, avg=17016.66, stdev=5913.36 00:38:14.498 clat (usec): min=4596, max=21233, avg=15593.82, stdev=2466.95 00:38:14.498 lat (usec): min=4611, max=21253, avg=15610.84, stdev=2466.42 00:38:14.498 clat percentiles (usec): 00:38:14.498 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10814], 20.00th=[13960], 00:38:14.498 | 
30.00th=[15795], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:38:14.498 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:38:14.498 | 99.00th=[19268], 99.50th=[19530], 99.90th=[21103], 99.95th=[21103], 00:38:14.498 | 99.99th=[21103] 00:38:14.498 bw ( KiB/s): min=22784, max=27648, per=30.42%, avg=24454.74, stdev=1261.00, samples=19 00:38:14.498 iops : min= 178, max= 216, avg=191.05, stdev= 9.85, samples=19 00:38:14.498 lat (msec) : 10=2.24%, 20=97.61%, 50=0.16% 00:38:14.498 cpu : usr=94.46%, sys=4.07%, ctx=576, majf=0, minf=9 00:38:14.498 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.498 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.498 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:14.498 00:38:14.498 Run status group 0 (all jobs): 00:38:14.498 READ: bw=78.5MiB/s (82.3MB/s), 24.0MiB/s-27.3MiB/s (25.2MB/s-28.6MB/s), io=786MiB (824MB), run=10003-10009msec 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.498 ************************************ 00:38:14.498 END TEST fio_dif_digest 00:38:14.498 ************************************ 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.498 00:38:14.498 real 0m11.123s 00:38:14.498 user 0m28.882s 00:38:14.498 sys 0m1.661s 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:14.498 17:58:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.498 17:58:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:14.498 17:58:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:14.498 rmmod nvme_tcp 00:38:14.498 rmmod nvme_fabrics 00:38:14.498 rmmod nvme_keyring 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:14.498 17:58:54 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 129371 ']' 00:38:14.499 17:58:54 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 129371 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 129371 ']' 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 129371 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129371 00:38:14.499 killing process with pid 129371 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129371' 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@969 -- # kill 129371 00:38:14.499 17:58:54 nvmf_dif -- common/autotest_common.sh@974 -- # wait 129371 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:14.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:14.499 Waiting for block devices as requested 00:38:14.499 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:14.499 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:14.499 17:58:55 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.499 17:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:14.499 17:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.499 17:58:56 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:38:14.499 ************************************ 00:38:14.499 END TEST nvmf_dif 00:38:14.499 ************************************ 00:38:14.499 00:38:14.499 real 1m1.345s 00:38:14.499 user 3m55.344s 00:38:14.499 sys 0m13.422s 00:38:14.499 17:58:56 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:14.499 17:58:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.499 17:58:56 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:14.499 17:58:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:14.499 17:58:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:14.499 17:58:56 -- common/autotest_common.sh@10 -- # set +x 00:38:14.499 ************************************ 00:38:14.499 START TEST nvmf_abort_qd_sizes 00:38:14.499 ************************************ 00:38:14.499 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:14.499 * Looking for test storage... 00:38:14.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:14.499 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:14.499 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:14.499 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:14.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.759 --rc genhtml_branch_coverage=1 00:38:14.759 --rc genhtml_function_coverage=1 00:38:14.759 --rc genhtml_legend=1 00:38:14.759 --rc geninfo_all_blocks=1 00:38:14.759 --rc geninfo_unexecuted_blocks=1 00:38:14.759 00:38:14.759 ' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:14.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.759 --rc genhtml_branch_coverage=1 00:38:14.759 --rc genhtml_function_coverage=1 00:38:14.759 --rc genhtml_legend=1 00:38:14.759 --rc geninfo_all_blocks=1 00:38:14.759 --rc geninfo_unexecuted_blocks=1 00:38:14.759 00:38:14.759 ' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:14.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.759 --rc genhtml_branch_coverage=1 00:38:14.759 --rc genhtml_function_coverage=1 00:38:14.759 --rc genhtml_legend=1 00:38:14.759 --rc geninfo_all_blocks=1 00:38:14.759 --rc geninfo_unexecuted_blocks=1 00:38:14.759 00:38:14.759 ' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:14.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.759 --rc genhtml_branch_coverage=1 00:38:14.759 --rc genhtml_function_coverage=1 00:38:14.759 --rc genhtml_legend=1 00:38:14.759 --rc geninfo_all_blocks=1 00:38:14.759 --rc geninfo_unexecuted_blocks=1 00:38:14.759 00:38:14.759 ' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.759 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:14.760 Cannot find device "nvmf_init_br" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:14.760 Cannot find device "nvmf_init_br2" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:14.760 Cannot find device "nvmf_tgt_br" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:14.760 Cannot find device "nvmf_tgt_br2" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:14.760 Cannot find device "nvmf_init_br" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:14.760 Cannot find device "nvmf_init_br2" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:14.760 Cannot find device "nvmf_tgt_br" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:14.760 Cannot find device "nvmf_tgt_br2" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:14.760 Cannot find device "nvmf_br" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:14.760 Cannot find device "nvmf_init_if" 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:38:14.760 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:15.018 Cannot find device "nvmf_init_if2" 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:15.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
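The deletes above fail harmlessly because the previous suite's nvmftestfini already tore the fixture down; nvmf_veth_init now rebuilds it from scratch. Condensed from the commands that follow in this log, the topology is two initiator veths on the host (10.0.0.1/.2) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), with all four host-side peers enslaved to one bridge:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Open TCP/4420 on the initiator interfaces and allow bridge forwarding.
  # The harness additionally tags each rule with an SPDK_NVMF comment so that
  # teardown can grep the rules back out of iptables-save.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT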
00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:15.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:15.018 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:15.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:15.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:38:15.276 00:38:15.276 --- 10.0.0.3 ping statistics --- 00:38:15.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.276 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:15.276 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:15.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:38:15.276 00:38:15.276 --- 10.0.0.4 ping statistics --- 00:38:15.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.276 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:15.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:15.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:38:15.276 00:38:15.276 --- 10.0.0.1 ping statistics --- 00:38:15.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.276 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:15.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:15.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:38:15.276 00:38:15.276 --- 10.0.0.2 ping statistics --- 00:38:15.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.276 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:38:15.276 17:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:15.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:15.842 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:16.101 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=130755 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 130755 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 130755 ']' 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:16.101 17:58:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.101 [2024-11-20 17:58:57.860298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
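With all four ping checks green, nvmfappstart launches the target inside the namespace; the banner below is its startup output. A sketch of that launch plus the wait-for-RPC step, assuming waitforlisten amounts to polling the default /var/tmp/spdk.sock until the app answers (the real helper in autotest_common.sh does more bookkeeping around the pid):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # Poll until the RPC socket responds; give up after roughly 10 seconds.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods \
          >/dev/null 2>&1 && break
      sleep 0.1
  done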
00:38:16.101 [2024-11-20 17:58:57.860604] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.360 [2024-11-20 17:58:58.003908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:16.360 [2024-11-20 17:58:58.130430] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.360 [2024-11-20 17:58:58.130804] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.360 [2024-11-20 17:58:58.130826] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.360 [2024-11-20 17:58:58.130837] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.360 [2024-11-20 17:58:58.130845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.360 [2024-11-20 17:58:58.130960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.360 [2024-11-20 17:58:58.131108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.360 [2024-11-20 17:58:58.131784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:16.360 [2024-11-20 17:58:58.131817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:38:17.300 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:38:17.301 17:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:38:17.301 17:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:38:17.301 17:58:59 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:17.301 17:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:17.301 17:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:17.301 17:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:17.301 ************************************ 00:38:17.301 START TEST spdk_target_abort 00:38:17.301 ************************************ 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.301 spdk_targetn1 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.301 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.559 [2024-11-20 17:58:59.089218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.559 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.560 [2024-11-20 17:58:59.126485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.560 17:58:59 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.560 17:58:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:20.843 Initializing NVMe Controllers 00:38:20.843 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:38:20.843 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:20.843 Initialization complete. Launching workers. 
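rabort assembles the -r target descriptor one field at a time (the five target= lines above) and then sweeps the queue depths declared in qds=(4 24 64); the qd=4 pass whose worker launch is printed just above is the first iteration, and its abort counters continue below. Stripped of the xtrace noise, the sweep reduces to this loop (-M is the read percentage of the mixed workload, -o the I/O size in bytes):

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  # One abort run per queue depth: the example app races abort commands
  # against its own outstanding 4 KiB read/write requests.
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done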
00:38:20.843 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11131, failed: 0 00:38:20.843 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1134, failed to submit 9997 00:38:20.843 success 793, unsuccessful 341, failed 0 00:38:20.843 17:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:20.843 17:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:24.124 Initializing NVMe Controllers 00:38:24.124 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:38:24.124 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:24.124 Initialization complete. Launching workers. 00:38:24.124 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5948, failed: 0 00:38:24.124 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 4714 00:38:24.124 success 304, unsuccessful 930, failed 0 00:38:24.124 17:59:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:24.124 17:59:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.408 Initializing NVMe Controllers 00:38:27.408 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:38:27.408 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:27.408 Initialization complete. Launching workers. 
00:38:27.408 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30075, failed: 0 00:38:27.408 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2612, failed to submit 27463 00:38:27.408 success 414, unsuccessful 2198, failed 0 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.408 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 130755 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 130755 ']' 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 130755 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130755 00:38:27.973 killing process with pid 130755 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130755' 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 130755 00:38:27.973 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 130755 00:38:28.232 ************************************ 00:38:28.232 END TEST spdk_target_abort 00:38:28.232 ************************************ 00:38:28.232 00:38:28.232 real 0m10.762s 00:38:28.232 user 0m44.122s 00:38:28.232 sys 0m1.614s 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.232 17:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:28.232 17:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:28.232 17:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:28.232 17:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:28.232 ************************************ 00:38:28.232 START TEST kernel_target_abort 00:38:28.232 
************************************ 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:28.232 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:28.233 17:59:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:28.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:28.491 Waiting for block devices as requested 00:38:28.749 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:28.749 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:28.749 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:38:28.749 No valid GPT data, bailing 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:38:29.007 No valid GPT data, bailing 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:38:29.007 No valid GPT data, bailing 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:38:29.007 No valid GPT data, bailing 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:38:29.007 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e --hostid=331739d4-7eee-415c-8218-055dbccf3f5e -a 10.0.0.1 -t tcp -s 4420 00:38:29.266 00:38:29.266 Discovery Log Number of Records 2, Generation counter 2 00:38:29.266 =====Discovery Log Entry 0====== 00:38:29.266 trtype: tcp 00:38:29.266 adrfam: ipv4 00:38:29.266 subtype: current discovery subsystem 00:38:29.266 treq: not specified, sq flow control disable supported 00:38:29.266 portid: 1 00:38:29.266 trsvcid: 4420 00:38:29.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:29.266 traddr: 10.0.0.1 00:38:29.266 eflags: none 00:38:29.266 sectype: none 00:38:29.266 =====Discovery Log Entry 1====== 00:38:29.266 trtype: tcp 00:38:29.266 adrfam: ipv4 00:38:29.266 subtype: nvme subsystem 00:38:29.266 treq: not specified, sq flow control disable supported 00:38:29.266 portid: 1 00:38:29.266 trsvcid: 4420 00:38:29.266 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:29.266 traddr: 10.0.0.1 00:38:29.266 eflags: none 00:38:29.266 sectype: none 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:29.266 17:59:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.266 17:59:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:32.549 Initializing NVMe Controllers 00:38:32.549 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.549 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:32.550 Initialization complete. Launching workers. 00:38:32.550 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31261, failed: 0 00:38:32.550 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31261, failed to submit 0 00:38:32.550 success 0, unsuccessful 31261, failed 0 00:38:32.550 17:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:32.550 17:59:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:35.833 Initializing NVMe Controllers 00:38:35.834 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:35.834 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:35.834 Initialization complete. Launching workers. 
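The mkdir/echo/ln -s sequence traced above is the entire kernel NVMe/TCP target setup done through configfs, and the nvme discover output confirms that both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are listening on 10.0.0.1:4420. xtrace hides redirection targets, so the sketch below fills the attribute files in from the standard nvmet configfs layout; the values match this run:

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$sub" "$sub/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # the free namespace found above
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$sub" "$nvmet/ports/1/subsystems/"             # expose the subsystem on the port

The abort runs then sweep the example's queue depth (-q 4, 24, 64, per the qds array above) against a fixed 4 KiB 50/50 read/write workload (-o 4096 -w rw -M 50); the qd=24 results continue directly below.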
00:38:35.834 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67978, failed: 0 00:38:35.834 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29779, failed to submit 38199 00:38:35.834 success 0, unsuccessful 29779, failed 0 00:38:35.834 17:59:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:35.834 17:59:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:39.118 Initializing NVMe Controllers 00:38:39.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:39.118 Initialization complete. Launching workers. 00:38:39.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80398, failed: 0 00:38:39.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20062, failed to submit 60336 00:38:39.118 success 0, unsuccessful 20062, failed 0 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:38:39.118 17:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:39.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:40.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:40.569 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:40.569 00:38:40.569 real 0m12.352s 00:38:40.569 user 0m6.355s 00:38:40.569 sys 0m3.435s 00:38:40.569 17:59:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:40.569 17:59:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:40.569 ************************************ 00:38:40.569 END TEST kernel_target_abort 00:38:40.569 ************************************ 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:40.569 
17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:40.569 rmmod nvme_tcp 00:38:40.569 rmmod nvme_fabrics 00:38:40.569 rmmod nvme_keyring 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 130755 ']' 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 130755 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 130755 ']' 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 130755 00:38:40.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (130755) - No such process 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 130755 is not found' 00:38:40.569 Process with pid 130755 is not found 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:38:40.569 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:41.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:41.148 Waiting for block devices as requested 00:38:41.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:41.148 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:41.415 17:59:22 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:41.415 17:59:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:38:41.415 00:38:41.415 real 0m27.033s 00:38:41.415 user 0m51.893s 00:38:41.415 sys 0m6.578s 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.415 17:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:41.415 ************************************ 00:38:41.415 END TEST nvmf_abort_qd_sizes 00:38:41.415 ************************************ 00:38:41.674 17:59:23 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:38:41.674 17:59:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:41.674 17:59:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.674 17:59:23 -- common/autotest_common.sh@10 -- # set +x 00:38:41.674 ************************************ 00:38:41.674 START TEST keyring_file 00:38:41.674 ************************************ 00:38:41.674 17:59:23 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:38:41.674 * Looking for test storage... 
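Before the keyring_file output continues, note the clean_kernel_target trace above: teardown mirrors the configfs setup in reverse. A condensed sketch, again with the redirection target reconstructed from the standard nvmet layout:

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > "$sub/namespaces/1/enable"                   # disable the namespace first
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$sub/namespaces/1" "$nvmet/ports/1" "$sub"
modprobe -r nvmet_tcp nvmet                           # unload once configfs is empty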
00:38:41.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.675 --rc genhtml_branch_coverage=1 00:38:41.675 --rc genhtml_function_coverage=1 00:38:41.675 --rc genhtml_legend=1 00:38:41.675 --rc geninfo_all_blocks=1 00:38:41.675 --rc geninfo_unexecuted_blocks=1 00:38:41.675 00:38:41.675 ' 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.675 --rc genhtml_branch_coverage=1 00:38:41.675 --rc genhtml_function_coverage=1 00:38:41.675 --rc genhtml_legend=1 00:38:41.675 --rc geninfo_all_blocks=1 00:38:41.675 --rc 
geninfo_unexecuted_blocks=1 00:38:41.675 00:38:41.675 ' 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.675 --rc genhtml_branch_coverage=1 00:38:41.675 --rc genhtml_function_coverage=1 00:38:41.675 --rc genhtml_legend=1 00:38:41.675 --rc geninfo_all_blocks=1 00:38:41.675 --rc geninfo_unexecuted_blocks=1 00:38:41.675 00:38:41.675 ' 00:38:41.675 17:59:23 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.675 --rc genhtml_branch_coverage=1 00:38:41.675 --rc genhtml_function_coverage=1 00:38:41.675 --rc genhtml_legend=1 00:38:41.675 --rc geninfo_all_blocks=1 00:38:41.675 --rc geninfo_unexecuted_blocks=1 00:38:41.675 00:38:41.675 ' 00:38:41.675 17:59:23 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:38:41.675 17:59:23 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.675 17:59:23 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.675 17:59:23 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.934 17:59:23 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.934 17:59:23 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.934 17:59:23 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.934 17:59:23 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.935 17:59:23 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.935 17:59:23 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:41.935 17:59:23 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:41.935 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:41.935 17:59:23 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.flKw6AIFhg 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@729 -- # python - 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.flKw6AIFhg 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.flKw6AIFhg 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.flKw6AIFhg 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AC6aHX6f2g 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:38:41.935 17:59:23 keyring_file -- nvmf/common.sh@729 -- # python - 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AC6aHX6f2g 00:38:41.935 17:59:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AC6aHX6f2g 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AC6aHX6f2g 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=131669 00:38:41.935 17:59:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 131669 00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 131669 ']' 00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:41.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
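The prep_key traces above show how each TLS PSK is staged: the raw hex key is wrapped into the NVMeTLSkey-1 interchange format by an inline python snippet in nvmf/common.sh (format_interchange_psk), written to a mktemp file, and locked down to mode 0600, which the keyring plugin insists on later in this run. A sketch of the pattern, reusing the helper name from the trace:

key0path=$(mktemp)                                    # /tmp/tmp.flKw6AIFhg in this run
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
chmod 0600 "$key0path"                                # looser modes are rejected (see the 0660 test below)

The file is only registered with the bdevperf app further down, via keyring_file_add_key over /var/tmp/bperf.sock.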
00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:41.935 17:59:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.935 [2024-11-20 17:59:23.671692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:41.935 [2024-11-20 17:59:23.671828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131669 ] 00:38:42.193 [2024-11-20 17:59:23.814614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.193 [2024-11-20 17:59:23.953320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:43.129 17:59:24 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:43.129 [2024-11-20 17:59:24.805173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.129 null0 00:38:43.129 [2024-11-20 17:59:24.837101] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:43.129 [2024-11-20 17:59:24.837338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.129 17:59:24 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:43.129 [2024-11-20 17:59:24.865088] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:43.129 2024/11/20 17:59:24 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:38:43.129 request: 00:38:43.129 { 00:38:43.129 "method": "nvmf_subsystem_add_listener", 00:38:43.129 "params": { 
00:38:43.129 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.129 "secure_channel": false, 00:38:43.129 "listen_address": { 00:38:43.129 "trtype": "tcp", 00:38:43.129 "traddr": "127.0.0.1", 00:38:43.129 "trsvcid": "4420" 00:38:43.129 } 00:38:43.129 } 00:38:43.129 } 00:38:43.129 Got JSON-RPC error response 00:38:43.129 GoRPCClient: error on JSON-RPC call 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:43.129 17:59:24 keyring_file -- keyring/file.sh@47 -- # bperfpid=131700 00:38:43.129 17:59:24 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:43.129 17:59:24 keyring_file -- keyring/file.sh@49 -- # waitforlisten 131700 /var/tmp/bperf.sock 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 131700 ']' 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:43.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:43.129 17:59:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:43.420 [2024-11-20 17:59:24.935779] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:43.420 [2024-11-20 17:59:24.935878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131700 ] 00:38:43.420 [2024-11-20 17:59:25.074981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.678 [2024-11-20 17:59:25.211742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.245 17:59:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:44.245 17:59:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:44.245 17:59:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:44.245 17:59:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:44.811 17:59:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AC6aHX6f2g 00:38:44.811 17:59:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AC6aHX6f2g 00:38:44.811 17:59:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:44.811 17:59:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:44.811 17:59:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.811 17:59:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.811 17:59:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.378 17:59:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.flKw6AIFhg == \/\t\m\p\/\t\m\p\.\f\l\K\w\6\A\I\F\h\g ]] 00:38:45.378 17:59:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:45.378 17:59:26 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:45.378 17:59:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.378 17:59:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.378 17:59:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:45.636 17:59:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AC6aHX6f2g == \/\t\m\p\/\t\m\p\.\A\C\6\a\H\X\6\f\2\g ]] 00:38:45.636 17:59:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:45.636 17:59:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.636 17:59:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.636 17:59:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.636 17:59:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.636 17:59:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.894 17:59:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:45.894 17:59:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:45.894 17:59:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.894 17:59:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.894 17:59:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.894 17:59:27 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.894 17:59:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:46.461 17:59:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:46.461 17:59:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.461 17:59:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.461 [2024-11-20 17:59:28.198509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:46.719 nvme0n1 00:38:46.719 17:59:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:46.719 17:59:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:46.719 17:59:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.719 17:59:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.719 17:59:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.719 17:59:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:46.978 17:59:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:46.978 17:59:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:46.978 17:59:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:46.978 17:59:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.978 17:59:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.978 17:59:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.978 17:59:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:47.236 17:59:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:47.236 17:59:28 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:47.495 Running I/O for 1 seconds... 
00:38:48.459 11335.00 IOPS, 44.28 MiB/s 00:38:48.459 Latency(us) 00:38:48.459 [2024-11-20T17:59:30.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:48.459 nvme0n1 : 1.01 11380.84 44.46 0.00 0.00 11213.05 5213.09 18945.86 00:38:48.459 [2024-11-20T17:59:30.247Z] =================================================================================================================== 00:38:48.459 [2024-11-20T17:59:30.247Z] Total : 11380.84 44.46 0.00 0.00 11213.05 5213.09 18945.86 00:38:48.459 { 00:38:48.459 "results": [ 00:38:48.459 { 00:38:48.459 "job": "nvme0n1", 00:38:48.459 "core_mask": "0x2", 00:38:48.459 "workload": "randrw", 00:38:48.459 "percentage": 50, 00:38:48.459 "status": "finished", 00:38:48.459 "queue_depth": 128, 00:38:48.459 "io_size": 4096, 00:38:48.459 "runtime": 1.007307, 00:38:48.459 "iops": 11380.840200653822, 00:38:48.459 "mibps": 44.45640703380399, 00:38:48.459 "io_failed": 0, 00:38:48.459 "io_timeout": 0, 00:38:48.459 "avg_latency_us": 11213.05270316564, 00:38:48.459 "min_latency_us": 5213.090909090909, 00:38:48.459 "max_latency_us": 18945.861818181816 00:38:48.459 } 00:38:48.459 ], 00:38:48.459 "core_count": 1 00:38:48.459 } 00:38:48.459 17:59:30 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:48.459 17:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:48.716 17:59:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:48.716 17:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.716 17:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.716 17:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.716 17:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.716 17:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.976 17:59:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:48.976 17:59:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:48.976 17:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:48.976 17:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.976 17:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.976 17:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:48.976 17:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.545 17:59:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:49.545 17:59:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:49.545 17:59:31 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.545 17:59:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:49.545 17:59:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:49.545 [2024-11-20 17:59:31.322944] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:49.545 [2024-11-20 17:59:31.323134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa917b0 (107): Transport endpoint is not connected 00:38:49.545 [2024-11-20 17:59:31.324125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa917b0 (9): Bad file descriptor 00:38:49.545 [2024-11-20 17:59:31.325120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:49.545 [2024-11-20 17:59:31.325145] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:49.545 [2024-11-20 17:59:31.325157] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:49.545 [2024-11-20 17:59:31.325170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:38:49.545 2024/11/20 17:59:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:38:49.545 request: 00:38:49.545 { 00:38:49.545 "method": "bdev_nvme_attach_controller", 00:38:49.545 "params": { 00:38:49.545 "name": "nvme0", 00:38:49.545 "trtype": "tcp", 00:38:49.545 "traddr": "127.0.0.1", 00:38:49.545 "adrfam": "ipv4", 00:38:49.545 "trsvcid": "4420", 00:38:49.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.545 "prchk_reftag": false, 00:38:49.545 "prchk_guard": false, 00:38:49.545 "hdgst": false, 00:38:49.545 "ddgst": false, 00:38:49.545 "psk": "key1", 00:38:49.545 "allow_unrecognized_csi": false 00:38:49.545 } 00:38:49.545 } 00:38:49.545 Got JSON-RPC error response 00:38:49.545 GoRPCClient: error on JSON-RPC call 00:38:49.806 17:59:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:49.806 17:59:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:49.806 17:59:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:49.806 17:59:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:49.806 17:59:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:49.806 17:59:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:49.806 17:59:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:49.806 17:59:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.806 17:59:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.806 17:59:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:50.065 17:59:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:50.065 17:59:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:50.065 17:59:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:50.065 17:59:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.065 17:59:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.065 17:59:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:50.065 17:59:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.324 17:59:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:50.325 17:59:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:50.325 17:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:50.583 17:59:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:50.583 17:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:50.842 17:59:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:50.842 17:59:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:50.842 17:59:32 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.410 17:59:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:51.410 17:59:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.flKw6AIFhg 00:38:51.410 17:59:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:51.410 17:59:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.410 17:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.410 [2024-11-20 17:59:33.173269] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.flKw6AIFhg': 0100660 00:38:51.410 [2024-11-20 17:59:33.173323] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:51.410 2024/11/20 17:59:33 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.flKw6AIFhg], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:38:51.410 request: 00:38:51.410 { 00:38:51.410 "method": "keyring_file_add_key", 00:38:51.410 "params": { 00:38:51.410 "name": "key0", 00:38:51.410 "path": "/tmp/tmp.flKw6AIFhg" 00:38:51.410 } 00:38:51.410 } 00:38:51.410 Got JSON-RPC error response 00:38:51.410 GoRPCClient: error on JSON-RPC call 00:38:51.410 17:59:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:51.410 17:59:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:51.410 17:59:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:51.410 17:59:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:51.410 17:59:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.flKw6AIFhg 00:38:51.670 17:59:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.670 17:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.flKw6AIFhg 00:38:51.670 17:59:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.flKw6AIFhg 00:38:51.929 17:59:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:51.929 17:59:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:51.929 17:59:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:51.929 17:59:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.929 17:59:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:51.930 17:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.189 17:59:33 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:52.189 17:59:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.189 17:59:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.189 17:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.448 [2024-11-20 17:59:34.033518] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.flKw6AIFhg': No such file or directory 00:38:52.448 [2024-11-20 17:59:34.033589] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:52.448 [2024-11-20 17:59:34.033622] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:52.448 [2024-11-20 17:59:34.033649] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:52.448 [2024-11-20 17:59:34.033661] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:52.448 [2024-11-20 17:59:34.033671] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:52.448 2024/11/20 17:59:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:38:52.448 request: 00:38:52.448 { 00:38:52.448 "method": "bdev_nvme_attach_controller", 00:38:52.448 "params": { 00:38:52.448 "name": "nvme0", 00:38:52.448 "trtype": "tcp", 00:38:52.448 "traddr": "127.0.0.1", 00:38:52.448 "adrfam": "ipv4", 00:38:52.448 "trsvcid": "4420", 00:38:52.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.448 "prchk_reftag": false, 00:38:52.448 "prchk_guard": false, 00:38:52.448 "hdgst": false, 00:38:52.448 "ddgst": false, 00:38:52.448 "psk": "key0", 00:38:52.448 "allow_unrecognized_csi": false 00:38:52.448 } 00:38:52.448 } 00:38:52.448 Got JSON-RPC error response 00:38:52.448 
GoRPCClient: error on JSON-RPC call 00:38:52.448 17:59:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:52.448 17:59:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:52.448 17:59:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:52.448 17:59:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:52.448 17:59:34 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:52.448 17:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:52.707 17:59:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LV4cj7LeJC 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:38:52.707 17:59:34 keyring_file -- nvmf/common.sh@729 -- # python - 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LV4cj7LeJC 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LV4cj7LeJC 00:38:52.707 17:59:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LV4cj7LeJC 00:38:52.707 17:59:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LV4cj7LeJC 00:38:52.707 17:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LV4cj7LeJC 00:38:52.966 17:59:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.966 17:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:53.532 nvme0n1 00:38:53.532 17:59:35 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:53.532 17:59:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:53.532 17:59:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:53.532 17:59:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:53.532 17:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.532 17:59:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
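The passing case above strings together the whole file-based keyring flow: generate an NVMe TLS interchange PSK, write it to a file only the owner can read, register it under a name, and attach an NVMe/TCP controller that references the key by that name. A condensed sketch of the same steps, with paths and RPC names taken from this log; the inline Python is an approximation of nvmf/common.sh's format_interchange_psk (assumed here to be base64 over the ASCII key material plus its little-endian CRC32, wrapped in the NVMeTLSkey-1 header), not the canonical implementation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)

    # Approximation of format_interchange_psk with digest=0.
    python3 -c 'import base64,struct,sys,zlib
k=sys.argv[1].encode()
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())' "$key" > "$path"

    # 0600 is mandatory: the 0660 attempt above was rejected by
    # keyring_file_check_path with "Invalid permissions ... 0100660".
    chmod 0600 "$path"

    "$rpc" -s "$sock" keyring_file_add_key key0 "$path"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The log also pins down when the file is actually read: after rm -f, the key stays registered (the refcnt check still passes) but a subsequent attach fails with "Could not stat key file", so the path is resolved at attach time, not at registration. Later in the run the same setup is captured with save_config and replayed into a fresh bdevperf instance through -c /dev/fd/63.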
00:38:53.791 17:59:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:53.791 17:59:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:53.791 17:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:54.050 17:59:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:54.050 17:59:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:54.050 17:59:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.050 17:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.050 17:59:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.308 17:59:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:54.308 17:59:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:54.308 17:59:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:54.308 17:59:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.308 17:59:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.308 17:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.308 17:59:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.567 17:59:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:54.567 17:59:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:54.567 17:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:54.825 17:59:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:54.825 17:59:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:54.825 17:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.084 17:59:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:55.084 17:59:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LV4cj7LeJC 00:38:55.084 17:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LV4cj7LeJC 00:38:55.343 17:59:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AC6aHX6f2g 00:38:55.343 17:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AC6aHX6f2g 00:38:55.601 17:59:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.601 17:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.860 nvme0n1 00:38:55.860 17:59:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:55.860 17:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
00:38:56.428 17:59:37 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:56.428 "subsystems": [ 00:38:56.428 { 00:38:56.429 "subsystem": "keyring", 00:38:56.429 "config": [ 00:38:56.429 { 00:38:56.429 "method": "keyring_file_add_key", 00:38:56.429 "params": { 00:38:56.429 "name": "key0", 00:38:56.429 "path": "/tmp/tmp.LV4cj7LeJC" 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "keyring_file_add_key", 00:38:56.429 "params": { 00:38:56.429 "name": "key1", 00:38:56.429 "path": "/tmp/tmp.AC6aHX6f2g" 00:38:56.429 } 00:38:56.429 } 00:38:56.429 ] 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "subsystem": "iobuf", 00:38:56.429 "config": [ 00:38:56.429 { 00:38:56.429 "method": "iobuf_set_options", 00:38:56.429 "params": { 00:38:56.429 "large_bufsize": 135168, 00:38:56.429 "large_pool_count": 1024, 00:38:56.429 "small_bufsize": 8192, 00:38:56.429 "small_pool_count": 8192 00:38:56.429 } 00:38:56.429 } 00:38:56.429 ] 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "subsystem": "sock", 00:38:56.429 "config": [ 00:38:56.429 { 00:38:56.429 "method": "sock_set_default_impl", 00:38:56.429 "params": { 00:38:56.429 "impl_name": "posix" 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "sock_impl_set_options", 00:38:56.429 "params": { 00:38:56.429 "enable_ktls": false, 00:38:56.429 "enable_placement_id": 0, 00:38:56.429 "enable_quickack": false, 00:38:56.429 "enable_recv_pipe": true, 00:38:56.429 "enable_zerocopy_send_client": false, 00:38:56.429 "enable_zerocopy_send_server": true, 00:38:56.429 "impl_name": "ssl", 00:38:56.429 "recv_buf_size": 4096, 00:38:56.429 "send_buf_size": 4096, 00:38:56.429 "tls_version": 0, 00:38:56.429 "zerocopy_threshold": 0 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "sock_impl_set_options", 00:38:56.429 "params": { 00:38:56.429 "enable_ktls": false, 00:38:56.429 "enable_placement_id": 0, 00:38:56.429 "enable_quickack": false, 00:38:56.429 "enable_recv_pipe": true, 00:38:56.429 "enable_zerocopy_send_client": false, 00:38:56.429 "enable_zerocopy_send_server": true, 00:38:56.429 "impl_name": "posix", 00:38:56.429 "recv_buf_size": 2097152, 00:38:56.429 "send_buf_size": 2097152, 00:38:56.429 "tls_version": 0, 00:38:56.429 "zerocopy_threshold": 0 00:38:56.429 } 00:38:56.429 } 00:38:56.429 ] 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "subsystem": "vmd", 00:38:56.429 "config": [] 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "subsystem": "accel", 00:38:56.429 "config": [ 00:38:56.429 { 00:38:56.429 "method": "accel_set_options", 00:38:56.429 "params": { 00:38:56.429 "buf_count": 2048, 00:38:56.429 "large_cache_size": 16, 00:38:56.429 "sequence_count": 2048, 00:38:56.429 "small_cache_size": 128, 00:38:56.429 "task_count": 2048 00:38:56.429 } 00:38:56.429 } 00:38:56.429 ] 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "subsystem": "bdev", 00:38:56.429 "config": [ 00:38:56.429 { 00:38:56.429 "method": "bdev_set_options", 00:38:56.429 "params": { 00:38:56.429 "bdev_auto_examine": true, 00:38:56.429 "bdev_io_cache_size": 256, 00:38:56.429 "bdev_io_pool_size": 65535, 00:38:56.429 "iobuf_large_cache_size": 16, 00:38:56.429 "iobuf_small_cache_size": 128 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "bdev_raid_set_options", 00:38:56.429 "params": { 00:38:56.429 "process_max_bandwidth_mb_sec": 0, 00:38:56.429 "process_window_size_kb": 1024 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "bdev_iscsi_set_options", 00:38:56.429 "params": { 00:38:56.429 "timeout_sec": 30 00:38:56.429 } 00:38:56.429 
}, 00:38:56.429 { 00:38:56.429 "method": "bdev_nvme_set_options", 00:38:56.429 "params": { 00:38:56.429 "action_on_timeout": "none", 00:38:56.429 "allow_accel_sequence": false, 00:38:56.429 "arbitration_burst": 0, 00:38:56.429 "bdev_retry_count": 3, 00:38:56.429 "ctrlr_loss_timeout_sec": 0, 00:38:56.429 "delay_cmd_submit": true, 00:38:56.429 "dhchap_dhgroups": [ 00:38:56.429 "null", 00:38:56.429 "ffdhe2048", 00:38:56.429 "ffdhe3072", 00:38:56.429 "ffdhe4096", 00:38:56.429 "ffdhe6144", 00:38:56.429 "ffdhe8192" 00:38:56.429 ], 00:38:56.429 "dhchap_digests": [ 00:38:56.429 "sha256", 00:38:56.429 "sha384", 00:38:56.429 "sha512" 00:38:56.429 ], 00:38:56.429 "disable_auto_failback": false, 00:38:56.429 "fast_io_fail_timeout_sec": 0, 00:38:56.429 "generate_uuids": false, 00:38:56.429 "high_priority_weight": 0, 00:38:56.429 "io_path_stat": false, 00:38:56.429 "io_queue_requests": 512, 00:38:56.429 "keep_alive_timeout_ms": 10000, 00:38:56.429 "low_priority_weight": 0, 00:38:56.429 "medium_priority_weight": 0, 00:38:56.429 "nvme_adminq_poll_period_us": 10000, 00:38:56.429 "nvme_error_stat": false, 00:38:56.429 "nvme_ioq_poll_period_us": 0, 00:38:56.429 "rdma_cm_event_timeout_ms": 0, 00:38:56.429 "rdma_max_cq_size": 0, 00:38:56.429 "rdma_srq_size": 0, 00:38:56.429 "reconnect_delay_sec": 0, 00:38:56.429 "timeout_admin_us": 0, 00:38:56.429 "timeout_us": 0, 00:38:56.429 "transport_ack_timeout": 0, 00:38:56.429 "transport_retry_count": 4, 00:38:56.429 "transport_tos": 0 00:38:56.429 } 00:38:56.429 }, 00:38:56.429 { 00:38:56.429 "method": "bdev_nvme_attach_controller", 00:38:56.429 "params": { 00:38:56.429 "adrfam": "IPv4", 00:38:56.429 "ctrlr_loss_timeout_sec": 0, 00:38:56.430 "ddgst": false, 00:38:56.430 "fast_io_fail_timeout_sec": 0, 00:38:56.430 "hdgst": false, 00:38:56.430 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.430 "name": "nvme0", 00:38:56.430 "prchk_guard": false, 00:38:56.430 "prchk_reftag": false, 00:38:56.430 "psk": "key0", 00:38:56.430 "reconnect_delay_sec": 0, 00:38:56.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.430 "traddr": "127.0.0.1", 00:38:56.430 "trsvcid": "4420", 00:38:56.430 "trtype": "TCP" 00:38:56.430 } 00:38:56.430 }, 00:38:56.430 { 00:38:56.430 "method": "bdev_nvme_set_hotplug", 00:38:56.430 "params": { 00:38:56.430 "enable": false, 00:38:56.430 "period_us": 100000 00:38:56.430 } 00:38:56.430 }, 00:38:56.430 { 00:38:56.430 "method": "bdev_wait_for_examine" 00:38:56.430 } 00:38:56.430 ] 00:38:56.430 }, 00:38:56.430 { 00:38:56.430 "subsystem": "nbd", 00:38:56.430 "config": [] 00:38:56.430 } 00:38:56.430 ] 00:38:56.430 }' 00:38:56.430 17:59:37 keyring_file -- keyring/file.sh@115 -- # killprocess 131700 00:38:56.430 17:59:37 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 131700 ']' 00:38:56.430 17:59:37 keyring_file -- common/autotest_common.sh@954 -- # kill -0 131700 00:38:56.430 17:59:37 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:56.430 17:59:37 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:56.430 17:59:37 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131700 00:38:56.430 killing process with pid 131700 00:38:56.430 Received shutdown signal, test time was about 1.000000 seconds 00:38:56.430 00:38:56.430 Latency(us) 00:38:56.430 [2024-11-20T17:59:38.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.430 [2024-11-20T17:59:38.218Z] 
=================================================================================================================== 00:38:56.430 [2024-11-20T17:59:38.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:56.430 17:59:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:56.430 17:59:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:56.430 17:59:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131700' 00:38:56.430 17:59:38 keyring_file -- common/autotest_common.sh@969 -- # kill 131700 00:38:56.430 17:59:38 keyring_file -- common/autotest_common.sh@974 -- # wait 131700 00:38:56.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:56.690 17:59:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=132181 00:38:56.690 17:59:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 132181 /var/tmp/bperf.sock 00:38:56.690 17:59:38 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:56.690 17:59:38 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 132181 ']' 00:38:56.690 17:59:38 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:56.690 17:59:38 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:56.690 17:59:38 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:56.690 17:59:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:56.690 "subsystems": [ 00:38:56.690 { 00:38:56.690 "subsystem": "keyring", 00:38:56.690 "config": [ 00:38:56.690 { 00:38:56.690 "method": "keyring_file_add_key", 00:38:56.690 "params": { 00:38:56.690 "name": "key0", 00:38:56.690 "path": "/tmp/tmp.LV4cj7LeJC" 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "keyring_file_add_key", 00:38:56.690 "params": { 00:38:56.690 "name": "key1", 00:38:56.690 "path": "/tmp/tmp.AC6aHX6f2g" 00:38:56.690 } 00:38:56.690 } 00:38:56.690 ] 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "subsystem": "iobuf", 00:38:56.690 "config": [ 00:38:56.690 { 00:38:56.690 "method": "iobuf_set_options", 00:38:56.690 "params": { 00:38:56.690 "large_bufsize": 135168, 00:38:56.690 "large_pool_count": 1024, 00:38:56.690 "small_bufsize": 8192, 00:38:56.690 "small_pool_count": 8192 00:38:56.690 } 00:38:56.690 } 00:38:56.690 ] 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "subsystem": "sock", 00:38:56.690 "config": [ 00:38:56.690 { 00:38:56.690 "method": "sock_set_default_impl", 00:38:56.690 "params": { 00:38:56.690 "impl_name": "posix" 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "sock_impl_set_options", 00:38:56.690 "params": { 00:38:56.690 "enable_ktls": false, 00:38:56.690 "enable_placement_id": 0, 00:38:56.690 "enable_quickack": false, 00:38:56.690 "enable_recv_pipe": true, 00:38:56.690 "enable_zerocopy_send_client": false, 00:38:56.690 "enable_zerocopy_send_server": true, 00:38:56.690 "impl_name": "ssl", 00:38:56.690 "recv_buf_size": 4096, 00:38:56.690 "send_buf_size": 4096, 00:38:56.690 "tls_version": 0, 00:38:56.690 "zerocopy_threshold": 0 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "sock_impl_set_options", 00:38:56.690 "params": { 00:38:56.690 "enable_ktls": false, 00:38:56.690 "enable_placement_id": 0, 00:38:56.690 "enable_quickack": false, 
00:38:56.690 "enable_recv_pipe": true, 00:38:56.690 "enable_zerocopy_send_client": false, 00:38:56.690 "enable_zerocopy_send_server": true, 00:38:56.690 "impl_name": "posix", 00:38:56.690 "recv_buf_size": 2097152, 00:38:56.690 "send_buf_size": 2097152, 00:38:56.690 "tls_version": 0, 00:38:56.690 "zerocopy_threshold": 0 00:38:56.690 } 00:38:56.690 } 00:38:56.690 ] 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "subsystem": "vmd", 00:38:56.690 "config": [] 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "subsystem": "accel", 00:38:56.690 "config": [ 00:38:56.690 { 00:38:56.690 "method": "accel_set_options", 00:38:56.690 "params": { 00:38:56.690 "buf_count": 2048, 00:38:56.690 "large_cache_size": 16, 00:38:56.690 "sequence_count": 2048, 00:38:56.690 "small_cache_size": 128, 00:38:56.690 "task_count": 2048 00:38:56.690 } 00:38:56.690 } 00:38:56.690 ] 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "subsystem": "bdev", 00:38:56.690 "config": [ 00:38:56.690 { 00:38:56.690 "method": "bdev_set_options", 00:38:56.690 "params": { 00:38:56.690 "bdev_auto_examine": true, 00:38:56.690 "bdev_io_cache_size": 256, 00:38:56.690 "bdev_io_pool_size": 65535, 00:38:56.690 "iobuf_large_cache_size": 16, 00:38:56.690 "iobuf_small_cache_size": 128 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "bdev_raid_set_options", 00:38:56.690 "params": { 00:38:56.690 "process_max_bandwidth_mb_sec": 0, 00:38:56.690 "process_window_size_kb": 1024 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "bdev_iscsi_set_options", 00:38:56.690 "params": { 00:38:56.690 "timeout_sec": 30 00:38:56.690 } 00:38:56.690 }, 00:38:56.690 { 00:38:56.690 "method": "bdev_nvme_set_options", 00:38:56.690 "params": { 00:38:56.690 "action_on_timeout": "none", 00:38:56.690 "allow_accel_sequence": false, 00:38:56.690 "arbitration_burst": 0, 00:38:56.690 "bdev_retry_count": 3, 00:38:56.690 "ctrlr_loss_timeout_sec": 0, 00:38:56.690 "delay_cmd_submit": true, 00:38:56.690 "dhchap_dhgroups": [ 00:38:56.690 "null", 00:38:56.690 "ffdhe2048", 00:38:56.690 "ffdhe3072", 00:38:56.690 "ffdhe4096", 00:38:56.690 "ffdhe6144", 00:38:56.690 "ffdhe8192" 00:38:56.690 ], 00:38:56.690 "dhchap_digests": [ 00:38:56.690 "sha256", 00:38:56.690 "sha384", 00:38:56.690 "sha512" 00:38:56.690 ], 00:38:56.690 "disable_auto_failback": false, 00:38:56.690 "fast_io_fail_timeout_sec": 0, 00:38:56.690 "generate_uuids": false, 00:38:56.690 "high_priority_weight": 0, 00:38:56.690 "io_path_stat": false, 00:38:56.690 "io_queue_requests": 512, 00:38:56.690 "keep_alive_timeout_ms": 10000, 00:38:56.690 "low_priority_weight": 0, 00:38:56.690 "medium_priority_weight": 0, 00:38:56.690 "nvme_adminq_poll_period_us": 10000, 00:38:56.690 "nvme_error_stat": false, 00:38:56.690 "nvme_ioq_poll_period_us": 0, 00:38:56.690 "rdma_cm_event_timeout_ms": 0, 00:38:56.690 "rdma_max_cq_size": 0, 00:38:56.690 "rdma_srq_size": 0, 00:38:56.690 "reconnect_delay_sec": 0, 00:38:56.690 "timeout_admin_us": 0, 00:38:56.691 "timeout_us": 0, 00:38:56.691 "transport_ack_timeout": 0, 00:38:56.691 "transport_retry_count": 4, 00:38:56.691 "transport_tos": 0 00:38:56.691 } 00:38:56.691 }, 00:38:56.691 { 00:38:56.691 "method": "bdev_nvme_attach_controller", 00:38:56.691 "params": { 00:38:56.691 "adrfam": "IPv4", 00:38:56.691 "ctrlr_loss_timeout_sec": 0, 00:38:56.691 "ddgst": false, 00:38:56.691 "fast_io_fail_timeout_sec": 0, 00:38:56.691 "hdgst": false, 00:38:56.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.691 "name": "nvme0", 00:38:56.691 "prchk_guard": false, 00:38:56.691 "prchk_reftag": 
false, 00:38:56.691 "psk": "key0", 00:38:56.691 "reconnect_delay_sec": 0, 00:38:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.691 "traddr": "127.0.0.1", 00:38:56.691 "trsvcid": "4420", 00:38:56.691 "trtype": "TCP" 00:38:56.691 } 00:38:56.691 }, 00:38:56.691 { 00:38:56.691 "method": "bdev_nvme_set_hotplug", 00:38:56.691 "params": { 00:38:56.691 "enable": false, 00:38:56.691 "period_us": 100000 00:38:56.691 } 00:38:56.691 }, 00:38:56.691 { 00:38:56.691 "method": "bdev_wait_for_examine" 00:38:56.691 } 00:38:56.691 ] 00:38:56.691 }, 00:38:56.691 { 00:38:56.691 "subsystem": "nbd", 00:38:56.691 "config": [] 00:38:56.691 } 00:38:56.691 ] 00:38:56.691 }' 00:38:56.691 17:59:38 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:56.691 17:59:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:56.691 [2024-11-20 17:59:38.384482] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:56.691 [2024-11-20 17:59:38.384966] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132181 ] 00:38:56.949 [2024-11-20 17:59:38.520593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.949 [2024-11-20 17:59:38.648834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.208 [2024-11-20 17:59:38.870403] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:57.776 17:59:39 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:57.776 17:59:39 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:57.776 17:59:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:57.776 17:59:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:57.776 17:59:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.034 17:59:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:58.034 17:59:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:58.034 17:59:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:58.034 17:59:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:58.034 17:59:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:58.034 17:59:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.034 17:59:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.602 17:59:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:58.602 17:59:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:58.602 17:59:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:58.602 17:59:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:58.602 17:59:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:58.602 17:59:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.602 17:59:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.860 17:59:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:58.860 17:59:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd 
bdev_nvme_get_controllers 00:38:58.860 17:59:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:58.860 17:59:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:59.118 17:59:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:59.118 17:59:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:59.118 17:59:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LV4cj7LeJC /tmp/tmp.AC6aHX6f2g 00:38:59.118 17:59:40 keyring_file -- keyring/file.sh@20 -- # killprocess 132181 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 132181 ']' 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@954 -- # kill -0 132181 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132181 00:38:59.118 killing process with pid 132181 00:38:59.118 Received shutdown signal, test time was about 1.000000 seconds 00:38:59.118 00:38:59.118 Latency(us) 00:38:59.118 [2024-11-20T17:59:40.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.118 [2024-11-20T17:59:40.906Z] =================================================================================================================== 00:38:59.118 [2024-11-20T17:59:40.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132181' 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@969 -- # kill 132181 00:38:59.118 17:59:40 keyring_file -- common/autotest_common.sh@974 -- # wait 132181 00:38:59.377 17:59:41 keyring_file -- keyring/file.sh@21 -- # killprocess 131669 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 131669 ']' 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 131669 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131669 00:38:59.377 killing process with pid 131669 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131669' 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@969 -- # kill 131669 00:38:59.377 17:59:41 keyring_file -- common/autotest_common.sh@974 -- # wait 131669 00:38:59.946 ************************************ 00:38:59.946 END TEST keyring_file 00:38:59.946 ************************************ 00:38:59.946 00:38:59.946 real 0m18.446s 00:38:59.946 user 0m45.542s 00:38:59.946 sys 0m3.862s 00:38:59.946 17:59:41 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:59.946 17:59:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.206 17:59:41 -- 
spdk/autotest.sh@289 -- # [[ y == y ]] 00:39:00.206 17:59:41 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:39:00.206 17:59:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:00.206 17:59:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:00.206 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:39:00.206 ************************************ 00:39:00.206 START TEST keyring_linux 00:39:00.206 ************************************ 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:39:00.206 Joined session keyring: 370400475 00:39:00.206 * Looking for test storage... 00:39:00.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:00.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.206 --rc genhtml_branch_coverage=1 00:39:00.206 --rc genhtml_function_coverage=1 00:39:00.206 --rc genhtml_legend=1 00:39:00.206 --rc geninfo_all_blocks=1 00:39:00.206 --rc geninfo_unexecuted_blocks=1 00:39:00.206 00:39:00.206 ' 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:00.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.206 --rc genhtml_branch_coverage=1 00:39:00.206 --rc genhtml_function_coverage=1 00:39:00.206 --rc genhtml_legend=1 00:39:00.206 --rc geninfo_all_blocks=1 00:39:00.206 --rc geninfo_unexecuted_blocks=1 00:39:00.206 00:39:00.206 ' 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:00.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.206 --rc genhtml_branch_coverage=1 00:39:00.206 --rc genhtml_function_coverage=1 00:39:00.206 --rc genhtml_legend=1 00:39:00.206 --rc geninfo_all_blocks=1 00:39:00.206 --rc geninfo_unexecuted_blocks=1 00:39:00.206 00:39:00.206 ' 00:39:00.206 17:59:41 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:00.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.206 --rc genhtml_branch_coverage=1 00:39:00.206 --rc genhtml_function_coverage=1 00:39:00.206 --rc genhtml_legend=1 00:39:00.206 --rc geninfo_all_blocks=1 00:39:00.206 --rc geninfo_unexecuted_blocks=1 00:39:00.206 00:39:00.206 ' 00:39:00.206 17:59:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:39:00.206 17:59:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.206 17:59:41 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:331739d4-7eee-415c-8218-055dbccf3f5e 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=331739d4-7eee-415c-8218-055dbccf3f5e 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.206 17:59:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.206 17:59:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.206 17:59:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.206 17:59:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.206 17:59:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:00.206 17:59:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 
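Throughout this suite the refcount assertions all reduce to one pipeline: dump the registered keys over the bperf RPC socket, pick the entry by name with jq, and read a single field. Collapsed into one helper (a reconstruction of what keyring/common.sh's get_key/get_refcnt do across bperf_cmd; the jq filters are verbatim from the log):

    get_refcnt() {
        local name=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
            jq -r ".[] | select(.name == \"$name\") | .refcnt"
    }

    (( $(get_refcnt key0) == 1 ))   # e.g. the file.sh@72 check earlier in the run

A refcnt of 1 means only the keyring holds the key; it climbs to 2 while an attached controller references it (the file.sh@100 check), and the same keyring_get_keys dump carries the .removed and .sn fields asserted elsewhere in this run.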
00:39:00.206 17:59:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:00.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.207 17:59:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.207 17:59:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:00.207 17:59:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:00.207 17:59:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.466 17:59:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:00.466 17:59:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.466 17:59:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.466 17:59:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:00.466 17:59:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:39:00.466 17:59:41 keyring_linux -- nvmf/common.sh@729 -- # python - 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:00.466 /tmp/:spdk-test:key0 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:00.466 17:59:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:39:00.466 17:59:42 keyring_linux -- nvmf/common.sh@729 -- # python - 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:00.466 /tmp/:spdk-test:key1 00:39:00.466 17:59:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:00.466 17:59:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=132350 00:39:00.466 17:59:42 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:00.466 17:59:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 132350 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 132350 ']' 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:00.466 17:59:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:00.466 [2024-11-20 17:59:42.184195] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
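spdk_tgt is configured through a single rpc_cmd block whose commands are fed on stdin, so only their side effects show up in the lines that follow: the TCP transport init, the null0 bdev, and the TLS listener on 127.0.0.1:4420. A plausible reconstruction of that block, assuming the stock SPDK RPCs for a TLS-enabled TCP target; the actual commands used by linux.sh are not echoed, so treat every name and flag below as an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

    "$rpc" nvmf_create_transport -t tcp
    "$rpc" bdev_null_create null0 100 4096            # hypothetical size/block size
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp \
        -a 127.0.0.1 -s 4420 --secure-channel         # --secure-channel requires TLS
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0   # key name or PSK path, version-dependent

The serial SPDKISFASTANDAWESOME and the :spdk-test:key0 name come from the nvmf/common.sh and prep_key lines above; everything else is inferred from the listener output below.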
00:39:00.467 [2024-11-20 17:59:42.184335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132350 ] 00:39:00.726 [2024-11-20 17:59:42.321338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.726 [2024-11-20 17:59:42.458505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:01.663 [2024-11-20 17:59:43.285199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.663 null0 00:39:01.663 [2024-11-20 17:59:43.317190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:01.663 [2024-11-20 17:59:43.317403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:01.663 1040777306 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:01.663 162918450 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=132385 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:01.663 17:59:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 132385 /var/tmp/bperf.sock 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 132385 ']' 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:01.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:01.663 17:59:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:01.663 [2024-11-20 17:59:43.398001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
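Unlike the keyring_file half, these tests hand the PSKs to SPDK through the kernel session keyring: the keyctl add calls above just returned the serials 1040777306 and 162918450 for the two keys. Condensed, the lifecycle the rest of the run drives (names and payloads as in the log; serials are per-session and will differ on a rerun):

    # register the PSK as a "user" key in the session keyring (@s); prints its serial
    keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # resolve the serial by name and read the payload back (linux.sh@16/@27)
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"

    # cleanup unlinks by serial; the log reports "1 links removed" per key
    keyctl unlink "$sn"

On the SPDK side only the key name changes: bdev_nvme_attach_controller is called with --psk :spdk-test:key0, and the keyring_linux module resolves that name through the session keyring once keyring_linux_set_options --enable has been issued, which is why bdevperf is started with --wait-for-rpc here, so the module can be enabled before framework_start_init.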
00:39:01.663 [2024-11-20 17:59:43.398098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132385 ] 00:39:01.922 [2024-11-20 17:59:43.536023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.922 [2024-11-20 17:59:43.649802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.858 17:59:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.858 17:59:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:02.858 17:59:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:02.858 17:59:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:03.117 17:59:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:03.117 17:59:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:03.376 17:59:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:03.376 17:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:03.635 [2024-11-20 17:59:45.402002] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:03.895 nvme0n1 00:39:03.895 17:59:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:03.895 17:59:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:03.895 17:59:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:03.895 17:59:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:03.895 17:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.895 17:59:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:04.154 17:59:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:04.154 17:59:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:04.154 17:59:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:04.154 17:59:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:04.154 17:59:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.154 17:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.154 17:59:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@25 -- # sn=1040777306 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 1040777306 == \1\0\4\0\7\7\7\3\0\6 ]] 00:39:04.413 17:59:46 
keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1040777306 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:04.413 17:59:46 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:04.672 Running I/O for 1 seconds... 00:39:05.656 11102.00 IOPS, 43.37 MiB/s 00:39:05.656 Latency(us) 00:39:05.656 [2024-11-20T17:59:47.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.656 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:05.656 nvme0n1 : 1.01 11088.54 43.31 0.00 0.00 11466.34 4289.63 14358.34 00:39:05.656 [2024-11-20T17:59:47.444Z] =================================================================================================================== 00:39:05.656 [2024-11-20T17:59:47.444Z] Total : 11088.54 43.31 0.00 0.00 11466.34 4289.63 14358.34 00:39:05.656 { 00:39:05.656 "results": [ 00:39:05.656 { 00:39:05.656 "job": "nvme0n1", 00:39:05.656 "core_mask": "0x2", 00:39:05.656 "workload": "randread", 00:39:05.656 "status": "finished", 00:39:05.656 "queue_depth": 128, 00:39:05.656 "io_size": 4096, 00:39:05.656 "runtime": 1.012757, 00:39:05.656 "iops": 11088.543451193129, 00:39:05.656 "mibps": 43.31462285622316, 00:39:05.656 "io_failed": 0, 00:39:05.656 "io_timeout": 0, 00:39:05.656 "avg_latency_us": 11466.34100736663, 00:39:05.656 "min_latency_us": 4289.629090909091, 00:39:05.656 "max_latency_us": 14358.341818181818 00:39:05.656 } 00:39:05.656 ], 00:39:05.656 "core_count": 1 00:39:05.656 } 00:39:05.656 17:59:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:05.656 17:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:05.914 17:59:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:05.914 17:59:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:05.914 17:59:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:05.914 17:59:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:05.914 17:59:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:05.914 17:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.171 17:59:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:06.171 17:59:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:06.171 17:59:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:06.171 17:59:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:06.171 17:59:47 keyring_linux -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.171 17:59:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.171 17:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.737 [2024-11-20 17:59:48.231829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:06.737 [2024-11-20 17:59:48.232742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6b710 (107): Transport endpoint is not connected 00:39:06.737 [2024-11-20 17:59:48.233733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6b710 (9): Bad file descriptor 00:39:06.737 [2024-11-20 17:59:48.234715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:06.737 [2024-11-20 17:59:48.234738] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:06.737 [2024-11-20 17:59:48.234749] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:06.737 [2024-11-20 17:59:48.234773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
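(Editorial note, not captured output: the attach attempt above uses --psk :spdk-test:key1 and is deliberately expected to fail; the NOT wrapper asserts that, and the JSON-RPC error dump that follows is the expected outcome. As a minimal sketch of the kernel-keyring operations the keyring_linux suite drives through keyctl, with the key name and serial number taken from this log and everything else illustrative:

    keyctl search @s user :spdk-test:key0   # resolve the key's serial number in the session keyring
    keyctl print 1040777306                 # dump the stored NVMe TLS PSK payload
    keyctl unlink 1040777306                # drop the key again during cleanup

These are the same three keyctl verbs visible in the trace below; the RPC side simply passes the ":spdk-test:keyN" name through --psk so bdevperf can look the PSK up in the keyring.)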
00:39:06.737 2024/11/20 17:59:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:39:06.737 request: 00:39:06.737 { 00:39:06.737 "method": "bdev_nvme_attach_controller", 00:39:06.737 "params": { 00:39:06.737 "name": "nvme0", 00:39:06.737 "trtype": "tcp", 00:39:06.737 "traddr": "127.0.0.1", 00:39:06.737 "adrfam": "ipv4", 00:39:06.737 "trsvcid": "4420", 00:39:06.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.737 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.737 "prchk_reftag": false, 00:39:06.737 "prchk_guard": false, 00:39:06.737 "hdgst": false, 00:39:06.737 "ddgst": false, 00:39:06.737 "psk": ":spdk-test:key1", 00:39:06.737 "allow_unrecognized_csi": false 00:39:06.737 } 00:39:06.737 } 00:39:06.737 Got JSON-RPC error response 00:39:06.737 GoRPCClient: error on JSON-RPC call 00:39:06.737 17:59:48 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:06.737 17:59:48 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:06.737 17:59:48 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:06.737 17:59:48 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@33 -- # sn=1040777306 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1040777306 00:39:06.737 1 links removed 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:06.737 17:59:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:06.738 17:59:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:06.738 17:59:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:06.738 17:59:48 keyring_linux -- keyring/linux.sh@33 -- # sn=162918450 00:39:06.738 17:59:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 162918450 00:39:06.738 1 links removed 00:39:06.738 17:59:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 132385 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 132385 ']' 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 132385 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132385 00:39:06.738 killing process with pid 132385 00:39:06.738 Received shutdown signal, test time was about 1.000000 
seconds 00:39:06.738 00:39:06.738 Latency(us) 00:39:06.738 [2024-11-20T17:59:48.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.738 [2024-11-20T17:59:48.526Z] =================================================================================================================== 00:39:06.738 [2024-11-20T17:59:48.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132385' 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 132385 00:39:06.738 17:59:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 132385 00:39:06.996 17:59:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 132350 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 132350 ']' 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 132350 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132350 00:39:06.996 killing process with pid 132350 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132350' 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 132350 00:39:06.996 17:59:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 132350 00:39:07.564 ************************************ 00:39:07.564 END TEST keyring_linux 00:39:07.564 ************************************ 00:39:07.564 00:39:07.564 real 0m7.568s 00:39:07.564 user 0m14.533s 00:39:07.564 sys 0m1.940s 00:39:07.564 17:59:49 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:07.564 17:59:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:07.822 17:59:49 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:07.822 17:59:49 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:07.822 17:59:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:07.822 17:59:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:07.822 17:59:49 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:07.822 17:59:49 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:07.822 17:59:49 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:07.822 17:59:49 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:39:07.822 17:59:49 -- common/autotest_common.sh@10 -- # set +x 00:39:07.822 17:59:49 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:07.822 17:59:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:07.822 17:59:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:07.822 17:59:49 -- common/autotest_common.sh@10 -- # set +x 00:39:09.724 INFO: APP EXITING 00:39:09.724 INFO: killing all VMs 00:39:09.724 INFO: killing vhost app 00:39:09.724 INFO: EXIT DONE 00:39:10.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:10.291 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:39:10.291 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:39:10.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:11.117 Cleaning 00:39:11.117 Removing: /var/run/dpdk/spdk0/config 00:39:11.117 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:11.117 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:11.117 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:11.117 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:11.117 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:11.117 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:11.117 Removing: /var/run/dpdk/spdk1/config 00:39:11.117 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:11.117 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:11.117 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:11.117 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:11.117 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:11.117 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:11.117 Removing: /var/run/dpdk/spdk2/config 00:39:11.117 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:11.117 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:11.117 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:11.117 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:11.117 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:11.117 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:11.117 Removing: /var/run/dpdk/spdk3/config 00:39:11.117 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:11.117 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:11.117 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:11.117 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:11.117 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:11.117 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:11.117 Removing: /var/run/dpdk/spdk4/config 00:39:11.117 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:11.117 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:11.117 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:11.117 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:11.117 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:11.117 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:11.117 Removing: /dev/shm/nvmf_trace.0 00:39:11.117 Removing: /dev/shm/spdk_tgt_trace.pid71466 00:39:11.117 Removing: /var/run/dpdk/spdk0 00:39:11.117 Removing: /var/run/dpdk/spdk1 00:39:11.117 Removing: /var/run/dpdk/spdk2 00:39:11.117 Removing: /var/run/dpdk/spdk3 00:39:11.117 Removing: /var/run/dpdk/spdk4 00:39:11.117 Removing: /var/run/dpdk/spdk_pid100069 00:39:11.117 Removing: 
/var/run/dpdk/spdk_pid100584 00:39:11.117 Removing: /var/run/dpdk/spdk_pid100688 00:39:11.117 Removing: /var/run/dpdk/spdk_pid100847 00:39:11.117 Removing: /var/run/dpdk/spdk_pid100905 00:39:11.117 Removing: /var/run/dpdk/spdk_pid100959 00:39:11.117 Removing: /var/run/dpdk/spdk_pid101018 00:39:11.117 Removing: /var/run/dpdk/spdk_pid101208 00:39:11.117 Removing: /var/run/dpdk/spdk_pid101366 00:39:11.117 Removing: /var/run/dpdk/spdk_pid101673 00:39:11.117 Removing: /var/run/dpdk/spdk_pid101814 00:39:11.117 Removing: /var/run/dpdk/spdk_pid102076 00:39:11.117 Removing: /var/run/dpdk/spdk_pid102207 00:39:11.117 Removing: /var/run/dpdk/spdk_pid102342 00:39:11.117 Removing: /var/run/dpdk/spdk_pid102757 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103218 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103219 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103221 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103506 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103800 00:39:11.117 Removing: /var/run/dpdk/spdk_pid103809 00:39:11.117 Removing: /var/run/dpdk/spdk_pid106228 00:39:11.117 Removing: /var/run/dpdk/spdk_pid106561 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107173 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107181 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107576 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107596 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107610 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107643 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107649 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107793 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107799 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107902 00:39:11.117 Removing: /var/run/dpdk/spdk_pid107905 00:39:11.117 Removing: /var/run/dpdk/spdk_pid108007 00:39:11.117 Removing: /var/run/dpdk/spdk_pid108016 00:39:11.376 Removing: /var/run/dpdk/spdk_pid108543 00:39:11.376 Removing: /var/run/dpdk/spdk_pid108586 00:39:11.376 Removing: /var/run/dpdk/spdk_pid108746 00:39:11.376 Removing: /var/run/dpdk/spdk_pid108872 00:39:11.376 Removing: /var/run/dpdk/spdk_pid109331 00:39:11.376 Removing: /var/run/dpdk/spdk_pid109590 00:39:11.376 Removing: /var/run/dpdk/spdk_pid110153 00:39:11.376 Removing: /var/run/dpdk/spdk_pid110786 00:39:11.376 Removing: /var/run/dpdk/spdk_pid112206 00:39:11.376 Removing: /var/run/dpdk/spdk_pid112867 00:39:11.376 Removing: /var/run/dpdk/spdk_pid112870 00:39:11.376 Removing: /var/run/dpdk/spdk_pid114959 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115049 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115141 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115235 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115395 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115484 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115570 00:39:11.376 Removing: /var/run/dpdk/spdk_pid115661 00:39:11.376 Removing: /var/run/dpdk/spdk_pid116050 00:39:11.376 Removing: /var/run/dpdk/spdk_pid116807 00:39:11.376 Removing: /var/run/dpdk/spdk_pid118189 00:39:11.376 Removing: /var/run/dpdk/spdk_pid118393 00:39:11.376 Removing: /var/run/dpdk/spdk_pid118690 00:39:11.376 Removing: /var/run/dpdk/spdk_pid119243 00:39:11.376 Removing: /var/run/dpdk/spdk_pid119638 00:39:11.376 Removing: /var/run/dpdk/spdk_pid122049 00:39:11.376 Removing: /var/run/dpdk/spdk_pid122090 00:39:11.376 Removing: /var/run/dpdk/spdk_pid122428 00:39:11.376 Removing: /var/run/dpdk/spdk_pid122478 00:39:11.376 Removing: /var/run/dpdk/spdk_pid122897 00:39:11.376 Removing: /var/run/dpdk/spdk_pid123479 00:39:11.376 Removing: /var/run/dpdk/spdk_pid123904 00:39:11.376 Removing: 
/var/run/dpdk/spdk_pid124963 00:39:11.376 Removing: /var/run/dpdk/spdk_pid126054 00:39:11.376 Removing: /var/run/dpdk/spdk_pid126167 00:39:11.376 Removing: /var/run/dpdk/spdk_pid126224 00:39:11.376 Removing: /var/run/dpdk/spdk_pid127836 00:39:11.376 Removing: /var/run/dpdk/spdk_pid128140 00:39:11.376 Removing: /var/run/dpdk/spdk_pid128467 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129023 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129028 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129446 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129597 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129752 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129846 00:39:11.376 Removing: /var/run/dpdk/spdk_pid129995 00:39:11.376 Removing: /var/run/dpdk/spdk_pid130100 00:39:11.376 Removing: /var/run/dpdk/spdk_pid130824 00:39:11.376 Removing: /var/run/dpdk/spdk_pid130860 00:39:11.376 Removing: /var/run/dpdk/spdk_pid130891 00:39:11.376 Removing: /var/run/dpdk/spdk_pid131141 00:39:11.376 Removing: /var/run/dpdk/spdk_pid131171 00:39:11.376 Removing: /var/run/dpdk/spdk_pid131205 00:39:11.376 Removing: /var/run/dpdk/spdk_pid131669 00:39:11.376 Removing: /var/run/dpdk/spdk_pid131700 00:39:11.376 Removing: /var/run/dpdk/spdk_pid132181 00:39:11.376 Removing: /var/run/dpdk/spdk_pid132350 00:39:11.376 Removing: /var/run/dpdk/spdk_pid132385 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71313 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71466 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71741 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71833 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71878 00:39:11.376 Removing: /var/run/dpdk/spdk_pid71982 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72018 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72152 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72431 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72615 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72700 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72792 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72876 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72913 00:39:11.376 Removing: /var/run/dpdk/spdk_pid72944 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73014 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73137 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73759 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73816 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73875 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73895 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73974 00:39:11.376 Removing: /var/run/dpdk/spdk_pid73989 00:39:11.376 Removing: /var/run/dpdk/spdk_pid74064 00:39:11.376 Removing: /var/run/dpdk/spdk_pid74092 00:39:11.376 Removing: /var/run/dpdk/spdk_pid74149 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74179 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74225 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74255 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74415 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74445 00:39:11.635 Removing: /var/run/dpdk/spdk_pid74533 00:39:11.635 Removing: /var/run/dpdk/spdk_pid75013 00:39:11.635 Removing: /var/run/dpdk/spdk_pid75408 00:39:11.635 Removing: /var/run/dpdk/spdk_pid77876 00:39:11.635 Removing: /var/run/dpdk/spdk_pid77922 00:39:11.635 Removing: /var/run/dpdk/spdk_pid78268 00:39:11.635 Removing: /var/run/dpdk/spdk_pid78314 00:39:11.635 Removing: /var/run/dpdk/spdk_pid78739 00:39:11.635 Removing: /var/run/dpdk/spdk_pid79306 00:39:11.635 Removing: /var/run/dpdk/spdk_pid79753 00:39:11.635 Removing: /var/run/dpdk/spdk_pid80787 00:39:11.635 Removing: /var/run/dpdk/spdk_pid81876 00:39:11.635 Removing: 
/var/run/dpdk/spdk_pid81993 00:39:11.635 Removing: /var/run/dpdk/spdk_pid82061 00:39:11.635 Removing: /var/run/dpdk/spdk_pid83700 00:39:11.635 Removing: /var/run/dpdk/spdk_pid84051 00:39:11.635 Removing: /var/run/dpdk/spdk_pid91343 00:39:11.635 Removing: /var/run/dpdk/spdk_pid91782 00:39:11.635 Removing: /var/run/dpdk/spdk_pid92447 00:39:11.635 Removing: /var/run/dpdk/spdk_pid92888 00:39:11.635 Removing: /var/run/dpdk/spdk_pid92890 00:39:11.635 Removing: /var/run/dpdk/spdk_pid92958 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93013 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93074 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93108 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93122 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93149 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93187 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93193 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93247 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93306 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93366 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93404 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93412 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93442 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93752 00:39:11.635 Removing: /var/run/dpdk/spdk_pid93912 00:39:11.635 Removing: /var/run/dpdk/spdk_pid94173 00:39:11.635 Clean 00:39:11.635 17:59:53 -- common/autotest_common.sh@1451 -- # return 0 00:39:11.635 17:59:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:11.635 17:59:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:11.635 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:39:11.635 17:59:53 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:11.635 17:59:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:11.635 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:39:11.893 17:59:53 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:11.893 17:59:53 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:39:11.893 17:59:53 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:39:11.893 17:59:53 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:39:11.893 17:59:53 -- spdk/autotest.sh@394 -- # hostname 00:39:11.894 17:59:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:39:12.151 geninfo: WARNING: invalid characters removed from testname! 
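(Editorial note, not captured output: the lcov invocations around this point reduce to a capture / merge / filter sequence. A minimal sketch, with paths shortened from this run and the long --rc option lists elided:

    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o cov_test.info   # capture counters from the test run
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                 # merge baseline and test captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                      # strip DPDK sources from the report

Later steps in the trace repeat the -r filter for '/usr/*', example code, and app directories before the final report is generated.)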
00:39:44.225 18:00:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:46.816 18:00:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:49.348 18:00:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:52.634 18:00:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:55.940 18:00:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:59.228 18:00:40 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:02.516 18:00:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:02.775 18:00:44 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:40:02.775 18:00:44 -- common/autotest_common.sh@1681 -- $ lcov --version 00:40:02.775 18:00:44 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:40:02.775 18:00:44 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:40:02.775 18:00:44 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:40:02.775 18:00:44 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:40:02.775 18:00:44 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:40:02.775 18:00:44 -- scripts/common.sh@336 -- $ IFS=.-: 00:40:02.775 18:00:44 -- scripts/common.sh@336 -- $ read -ra ver1 00:40:02.775 18:00:44 -- scripts/common.sh@337 -- $ IFS=.-: 00:40:02.775 18:00:44 -- scripts/common.sh@337 -- $ read -ra ver2 00:40:02.775 18:00:44 -- scripts/common.sh@338 -- $ local 'op=<' 00:40:02.775 18:00:44 -- scripts/common.sh@340 -- $ ver1_l=2 00:40:02.775 18:00:44 -- scripts/common.sh@341 -- $ ver2_l=1 00:40:02.775 18:00:44 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:40:02.775 18:00:44 -- scripts/common.sh@344 -- $ case "$op" in 00:40:02.775 18:00:44 -- scripts/common.sh@345 -- $ : 1 00:40:02.775 18:00:44 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:40:02.775 18:00:44 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:02.775 18:00:44 -- scripts/common.sh@365 -- $ decimal 1 00:40:02.775 18:00:44 -- scripts/common.sh@353 -- $ local d=1 00:40:02.775 18:00:44 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:40:02.775 18:00:44 -- scripts/common.sh@355 -- $ echo 1 00:40:02.775 18:00:44 -- scripts/common.sh@365 -- $ ver1[v]=1 00:40:02.775 18:00:44 -- scripts/common.sh@366 -- $ decimal 2 00:40:02.775 18:00:44 -- scripts/common.sh@353 -- $ local d=2 00:40:02.775 18:00:44 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:40:02.775 18:00:44 -- scripts/common.sh@355 -- $ echo 2 00:40:02.775 18:00:44 -- scripts/common.sh@366 -- $ ver2[v]=2 00:40:02.775 18:00:44 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:40:02.775 18:00:44 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:40:02.775 18:00:44 -- scripts/common.sh@368 -- $ return 0 00:40:02.775 18:00:44 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.775 18:00:44 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:40:02.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.776 --rc genhtml_branch_coverage=1 00:40:02.776 --rc genhtml_function_coverage=1 00:40:02.776 --rc genhtml_legend=1 00:40:02.776 --rc geninfo_all_blocks=1 00:40:02.776 --rc geninfo_unexecuted_blocks=1 00:40:02.776 00:40:02.776 ' 00:40:02.776 18:00:44 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:40:02.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.776 --rc genhtml_branch_coverage=1 00:40:02.776 --rc genhtml_function_coverage=1 00:40:02.776 --rc genhtml_legend=1 00:40:02.776 --rc geninfo_all_blocks=1 00:40:02.776 --rc geninfo_unexecuted_blocks=1 00:40:02.776 00:40:02.776 ' 00:40:02.776 18:00:44 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:40:02.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.776 --rc genhtml_branch_coverage=1 00:40:02.776 --rc genhtml_function_coverage=1 00:40:02.776 --rc genhtml_legend=1 00:40:02.776 --rc geninfo_all_blocks=1 00:40:02.776 --rc geninfo_unexecuted_blocks=1 00:40:02.776 00:40:02.776 ' 00:40:02.776 18:00:44 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:40:02.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.776 --rc genhtml_branch_coverage=1 00:40:02.776 --rc genhtml_function_coverage=1 00:40:02.776 --rc genhtml_legend=1 00:40:02.776 --rc geninfo_all_blocks=1 00:40:02.776 --rc geninfo_unexecuted_blocks=1 00:40:02.776 00:40:02.776 ' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:02.776 18:00:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:40:02.776 18:00:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:02.776 18:00:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.776 18:00:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.776 18:00:44 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.776 18:00:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.776 18:00:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.776 18:00:44 -- paths/export.sh@5 -- $ export PATH 00:40:02.776 18:00:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.776 18:00:44 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:40:02.776 18:00:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:40:02.776 18:00:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732125644.XXXXXX 00:40:02.776 18:00:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732125644.7c45Oc 00:40:02.776 18:00:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:40:02.776 18:00:44 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:40:02.776 18:00:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:40:02.776 18:00:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:40:02.776 18:00:44 -- common/autotest_common.sh@10 -- $ set +x 00:40:02.776 18:00:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:40:02.776 18:00:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:40:02.776 18:00:44 -- pm/common@17 -- $ local monitor 
00:40:02.776 18:00:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:02.776 18:00:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:02.776 18:00:44 -- pm/common@25 -- $ sleep 1 00:40:02.776 18:00:44 -- pm/common@21 -- $ date +%s 00:40:02.776 18:00:44 -- pm/common@21 -- $ date +%s 00:40:02.776 18:00:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732125644 00:40:02.776 18:00:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732125644 00:40:02.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732125644_collect-cpu-load.pm.log 00:40:02.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732125644_collect-vmstat.pm.log 00:40:03.713 18:00:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:40:03.713 18:00:45 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:40:03.713 18:00:45 -- spdk/autopackage.sh@14 -- $ timing_finish 00:40:03.713 18:00:45 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:03.713 18:00:45 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:03.713 18:00:45 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:03.973 18:00:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:03.973 18:00:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:03.973 18:00:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:03.973 18:00:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.973 18:00:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:40:03.973 18:00:45 -- pm/common@44 -- $ pid=134204 00:40:03.973 18:00:45 -- pm/common@50 -- $ kill -TERM 134204 00:40:03.973 18:00:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.973 18:00:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:40:03.973 18:00:45 -- pm/common@44 -- $ pid=134206 00:40:03.973 18:00:45 -- pm/common@50 -- $ kill -TERM 134206 00:40:03.973 + [[ -n 5994 ]] 00:40:03.973 + sudo kill 5994 00:40:03.982 [Pipeline] } 00:40:04.002 [Pipeline] // timeout 00:40:04.009 [Pipeline] } 00:40:04.026 [Pipeline] // stage 00:40:04.031 [Pipeline] } 00:40:04.046 [Pipeline] // catchError 00:40:04.056 [Pipeline] stage 00:40:04.059 [Pipeline] { (Stop VM) 00:40:04.072 [Pipeline] sh 00:40:04.352 + vagrant halt 00:40:08.536 ==> default: Halting domain... 00:40:13.817 [Pipeline] sh 00:40:14.097 + vagrant destroy -f 00:40:18.286 ==> default: Removing domain... 
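(Editorial note, not captured output: the signal_monitor_resources teardown traced above is just a pid-file sweep. A sketch under stated assumptions -- the pid files and TERM signal match the trace, while the loop shape and the $OUTPUT_DIR shorthand for /home/vagrant/spdk_repo/spdk/../output are editorial:

    for pidfile in "$OUTPUT_DIR"/power/collect-cpu-load.pid "$OUTPUT_DIR"/power/collect-vmstat.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"   # signal each resource monitor, as with pids 134204/134206 above
    done

After the monitors exit, the pipeline proceeds to the VM teardown shown here: vagrant halt for a graceful shutdown, then vagrant destroy -f to remove the libvirt domain without prompting.)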
00:40:18.299 [Pipeline] sh 00:40:18.579 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:40:18.588 [Pipeline] } 00:40:18.603 [Pipeline] // stage 00:40:18.608 [Pipeline] } 00:40:18.624 [Pipeline] // dir 00:40:18.631 [Pipeline] } 00:40:18.646 [Pipeline] // wrap 00:40:18.652 [Pipeline] } 00:40:18.665 [Pipeline] // catchError 00:40:18.675 [Pipeline] stage 00:40:18.677 [Pipeline] { (Epilogue) 00:40:18.690 [Pipeline] sh 00:40:18.971 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:27.110 [Pipeline] catchError 00:40:27.112 [Pipeline] { 00:40:27.125 [Pipeline] sh 00:40:27.407 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:27.973 Artifacts sizes are good 00:40:27.982 [Pipeline] } 00:40:27.997 [Pipeline] // catchError 00:40:28.009 [Pipeline] archiveArtifacts 00:40:28.015 Archiving artifacts 00:40:28.182 [Pipeline] cleanWs 00:40:28.196 [WS-CLEANUP] Deleting project workspace... 00:40:28.196 [WS-CLEANUP] Deferred wipeout is used... 00:40:28.202 [WS-CLEANUP] done 00:40:28.204 [Pipeline] } 00:40:28.220 [Pipeline] // stage 00:40:28.226 [Pipeline] } 00:40:28.241 [Pipeline] // node 00:40:28.247 [Pipeline] End of Pipeline 00:40:28.288 Finished: SUCCESS